Introduction to AZ-204 Certification
- Designed for developers working with Microsoft Azure.
- Covers deploying web apps, Azure Functions, application integrations, and Azure API programmatic interactions.
- Recommended path: Start with AZ-900, then AZ-104, followed by AZ-204.
- Exam difficulty is higher compared to AWS and GCP developer certifications.
- Study duration varies from 20 to 50 hours based on experience.
Azure Functions and Serverless Architecture
- Serverless services are fully managed and scalable, and abstract away the underlying infrastructure.
- Azure Functions are event-driven, with triggers and bindings to integrate with input/output services.
- Azure Functions require a storage account for state and code management.
- Authorization levels: anonymous, function (per-function API key), admin (master key).
- Debugging available via log streams and live metrics.
- Hosting plans: Consumption (serverless, scales to zero), Premium (pre-warmed), Dedicated (App Service Plan).
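For non-.NET languages, the trigger and bindings described above are declared in a per-function `function.json` file. A minimal illustrative sketch (the queue name, container path, and connection setting are examples, not required values) with one queue trigger and one blob output binding:

```json
{
  "bindings": [
    {
      "type": "queueTrigger",
      "direction": "in",
      "name": "item",
      "queueName": "orders",
      "connection": "AzureWebJobsStorage"
    },
    {
      "type": "blob",
      "direction": "out",
      "name": "receipt",
      "path": "receipts/{rand-guid}.json",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
```

Note that a function may have only one trigger binding, but any number of input and output bindings.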
Azure Virtual Machines (VMs)
- Highly customizable with choice of OS, compute, memory, and storage.
- VM size and availability linked to image type selected.
- Three connection methods: SSH (port 22), RDP (port 3389), Azure Bastion (browser-based).
- VM management includes update management for OS patches.
- Differentiation between Linux and Windows VMs for licensing and resource needs.
Infrastructure as Code with Azure Resource Manager (ARM) Templates
- ARM templates are JSON-based declarative scripts for provisioning Azure resources.
- Template components: schema, parameters, variables, functions, resources, outputs.
- Support modularity, testing, validation, deployments tracking, and integration with CI/CD.
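The components listed above map directly to sections of the template file. A minimal illustrative template that provisions a storage account (user-defined functions omitted; the parameter name, SKU, and `apiVersion` here are examples):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageName": { "type": "string" }
  },
  "variables": {
    "location": "[resourceGroup().location]"
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "name": "[parameters('storageName')]",
      "location": "[variables('location')]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ],
  "outputs": {
    "storageId": {
      "type": "string",
      "value": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageName'))]"
    }
  }
}
```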
Azure Containers
- Azure Container Instances (ACI): fully managed Docker container execution.
- Supports Linux and Windows containers with quick provisioning.
- Container groups co-locate related containers on the same host.
- Azure Container Registry (ACR): managed private Docker registry for container images.
- Supports image build automation and integration with Azure DevOps and other pipelines.
Azure App Services
- Platform as a Service (PaaS) for hosting web apps, APIs, and mobile backends.
- Supports multiple runtimes: .NET, Java, Node.js, PHP, Python, Ruby (partial).
- Supports custom container deployment.
- Features deployment slots for staged deployments with traffic swapping.
- Scaling options include manual scale-up and auto scale-out based on metrics.
- Supports WebJobs for background processing in Windows environments.
Azure Storage Options
- Types include Blob, File Shares, Queues, Tables, Disks.
- Performance tiers: Standard (HDD) and Premium (SSD).
- Access tiers for Blob storage: Hot, Cool, Archive based on access frequency.
- Storage Explorer and the AzCopy CLI tool facilitate data management.
- Lifecycle management enables automated data tiering and expiration rules.
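A lifecycle management policy is itself expressed as JSON. This illustrative rule (the `logs/` prefix and day thresholds are made-up examples) tiers block blobs to Cool after 30 days without modification, to Archive after 90, and deletes them after a year:

```json
{
  "rules": [
    {
      "name": "age-out-logs",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "logs/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete": { "daysAfterModificationGreaterThan": 365 }
          }
        }
      }
    }
  ]
}
```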
Azure Active Directory (AAD)
- Cloud-based identity and access management service.
- Supports users, groups, managed identities, and guest users.
- Editions vary: Free, Office 365 Apps, Premium P1 and P2 with advanced features.
- Supports multi-factor authentication (MFA), password reset, and role-based access.
- Integrates with on-premises AD using Azure AD Connect.
Azure Key Vault
- Secures cryptographic keys, secrets, and certificates.
- Supports hardware security modules (HSMs) compliant with FIPS 140-2.
- Enables key management, rotation, backup/restore, and access control.
- Supports certificate lifecycle management and integration with certificate authorities.
Azure Monitor and Application Insights
- Azure Monitor centralizes telemetry data collection, analysis, and alerting across applications and infrastructure.
- Provides metrics, logs, and traces for observability.
- Application Insights focuses on application performance monitoring with out-of-the-box language SDKs.
- Supports manual and auto-instrumentation.
- Features include usage analytics, custom event tracking, sampling to optimize costs, dashboards, and workbooks.
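To make the sampling idea above concrete, here is a simplified Python sketch of fixed-rate sampling. Hashing the operation ID keeps all telemetry for one request together (sampled in or out as a unit), which is roughly how Application Insights' fixed-rate sampling behaves; this is an illustration of the concept, not the SDK's actual algorithm.

```python
import zlib

def sampled_in(operation_id: str, retain_percent: float) -> bool:
    """Deterministically keep roughly retain_percent of operations."""
    # Stable hash mapped to a value in [0.0, 100.0); the same operation id
    # always lands in the same bucket, so a request is kept or dropped whole.
    bucket = zlib.crc32(operation_id.encode()) % 10000 / 100.0
    return bucket < retain_percent

def filter_telemetry(items, retain_percent):
    """Keep only telemetry items whose operation was sampled in."""
    return [i for i in items if sampled_in(i["operation_id"], retain_percent)]
```

Because the decision is deterministic, re-running the filter over the same items yields the same subset, which is what makes correlated traces survive sampling.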
Azure API Management (APIM)
- Manages APIs including securing, versioning, documenting, and analytics.
- Components: APIs, products (API groups), developers, policies, and gateway.
- Supports various API definitions: manual, OpenAPI, WADL, WSDL.
- Provides powerful policies for authentication, caching, transformation, and throttling.
- Developer portal allows API consumers to explore and test APIs.
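APIM policies are written as XML applied in the inbound/outbound pipeline. A small illustrative fragment combining throttling and caching (the call limits and cache duration are example values):

```xml
<policies>
  <inbound>
    <base />
    <!-- throttle each subscription to 10 calls per 60 seconds -->
    <rate-limit calls="10" renewal-period="60" />
    <!-- try to serve repeated requests from cache -->
    <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" />
  </inbound>
  <outbound>
    <base />
    <!-- store responses in cache for 5 minutes -->
    <cache-store duration="300" />
  </outbound>
</policies>
```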
Event-Driven Services
- Event Grid: serverless event routing service for reacting to resource changes with custom or system topics.
- Event Hub: big data streaming service for high-throughput event ingestion and processing.
- Azure Service Bus: enterprise messaging service supporting queues (point-to-point) and topics (pub-sub).
Redis and Azure Cache for Redis
- Redis: open-source in-memory data structure store used for caching and fast data access.
- Supports strings, lists, sets, sorted sets, hashes, bitmaps, streams, etc.
- Azure Cache for Redis is a managed Redis offering integrated with Azure for high performance and scale.
- Common use cases include session storage, caching, real-time data and queues.
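The caching use case above usually follows the cache-aside pattern: check the cache first, and on a miss load from the database and populate the cache with a TTL. A minimal Python sketch; the `FakeRedis` class is a stand-in for Azure Cache for Redis (with the real service you would use a Redis client's `GET`/`SETEX`), and all names are illustrative.

```python
import time

class FakeRedis:
    """Minimal stand-in for a Redis string store with TTL support."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def setex(self, key, ttl_seconds, value):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily expire, like Redis may
            return None
        return value

def get_product(cache, product_id, load_from_db):
    """Cache-aside: try the cache first, fall back to the database on a miss."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached, "cache"
    value = load_from_db(product_id)
    cache.setex(key, 60, value)  # keep the entry for 60 seconds
    return value, "db"
```

The first call for a given ID hits the database; subsequent calls within the TTL are served from the cache, which is where the latency win comes from.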
This study guide integrates essential concepts, capabilities, and practical advice extracted from the AZ-204 course content to aid candidates in mastering Microsoft Azure development services, preparing for the certification exam, and applying Azure skills in real-world scenarios.
hey this is andrew brown your cloud instructor exam pro bringing you another complete study course and this time it's
the microsoft azure developer associate also known as the az204 made available to you on freecodecamp this course is
designed to help you pass and achieve microsoft issued certification and the way we're going to do that is by doing
lecture content follow alongs a practice exam and have cheat sheets for the day of the exam so you can prove on your
resume your linkedin you have that as your knowledge you can go get that cloud job or get that promotion and tell you a
bit about me i was previously the cto of multiple ed tech companies 15 years industry experience five years
specializing in cloud i'm an aws community hero and i've published many many free courses just like this one i
love star trek and coconut water and i just want to take a moment here to thank viewers like you because it's you
that make these courses possible and the best way to support more courses just like this one is to purchase the
additional study materials at az204 so you can get access to study notes flash cards quizlets downloadable lecture
slides downloadable cheat sheets practice exams ask questions get learning support but if you sign up
right now for free you'll get a practice exam and cheat sheet there's no credit card required the free stuff does not
have a trial limit so you do not have to worry about it vanishing it's not a demo uh if you have if there are updates to
this course they'll be in the youtube descriptions on free code camp so watch out if there are corrections additions
modifications to make sure you're using the latest version of this course and if you want to keep up to date on upcoming
courses the best way to find that is on twitter i'm at andrew brown on twitter and if you do pass i love to hear uh you
know your success story um or what you'd like to see next so there you go hey this is andrew brown from exam pro
and we're at the start of our journey asking the most important questions first which is what is the azure
developer associate so it's a microsoft certification about the azure platform focused on multiple ways to deploy web
apps to azure a deep dive into azure functions i mean seriously deep a broad look into application integration
services which azure has a lot of to the point where they have ones that cover the same purpose and lots of
hands-on with azure cli sdks or programmatic interactions with azure services so anything to do with the
azure api the course code for the azure developer is the az-204 not to be confused with the az-104 but it is very
complementary to the other course which we'll talk about when we talk about the roadmap and i do want to point out that
microsoft azure is a very code and script driven uh platform compared to aws and gcp so it's better having
developer knowledge and i actually consider it a must when we're working with microsoft and i'll explain why here
in a bit so who's the certification for we'll consider the 204 if you are a web app developer looking to pick up cloud
skills or transition to a cloud developer role you are a cloud developer cloud engineer who needs to integrate
azure services into your app or deploy your app to azure or you need to learn a lot about azure functions to build
service workloads but as a note i kind of feel that this is an essential certification at the associate level and
so looking at a roadmap i always suggest people to start with the az900 which is the fundamentals because it's going to
really help you get not just introduced to azure but also the test taking
experience because it is a much more difficult and different experience than the other two providers
and generally from that we usually recommend to go with the administrator the 104 because that is a broad amount
of services and that is the most common use case why people are on microsoft azure because they're usually i.t dev
shops but as a complement usually after that i would recommend going to the
developer the az204 because the nature of azure is that a lot of the ui it like is script driven so when you are in the
portal uh you have to touch script more often than not or you might run into features that simply do not exist in the
portal so you have to use scripting so really to be a proficient person working on azure you need developer
programming experience and so i always pair these two together now where you want to go after that is up to you a lot
of people like to go the solutions architect or the devops engineer expert there's a bunch of other
associates and i consider these kind of like uh like mid-level specialties where you can do those there's of course more
certifications than we're looking at that is actually here but you know these are the most general
ones here so you know hopefully that gives you an idea that you should probably take the developer after your
associate and then after that you go wherever you want in terms of difficulty the azure
developer associate is i would say two times harder than the aws developer associate and three times more difficult
than the gcp associate and i'm talking about the exams not necessarily the application
of being a developer in the platform but just the fact that the way azure makes their exams is they really want to test
for practical knowledge so it's not about conceptual or strategic information not to say that there isn't
those kind of questions on exam but it's a huge focus on do you know how to actually set things up and do you know
the nuances of them so you're going to see a lot more labs than normal in my courses for this particular study course
how long does it take to pass well it depends on you but to give you a general idea if you're a beginner so let's say
you had the az-900 but not much hands-on experience you've never written code or had a tech role then you're looking at 50 hours
even if you've taken the az-104 it's still going to take this long because it's a different uh different
beast than the 104. if you're experienced so you already have practical knowledge working on azure uh
you've deployed apps to azure you have a strong background in programming you're looking at 20 hours and so i like to uh
set a goal of somewhere in between so 30 hours average study time so for lectures and labs that's going to take up 50 percent of
your time so you know we're looking at it's probably more labs than lecture and then the other 50 percent is
practice exams so i recommend you study one to two hours for 20 days and really do spread that out because if you do too
much of it together uh you have a a good chance of forgetting information so you know make
sure you spread that out and make sure that it's it becomes part of your long-term memory so where do you take
the example you can do it in person or test center or online from the convenience of your own home so
microsoft delivers exams via two different providers psi and pearson vue and those are online proctored
exam systems both of these providers have their own test center networks so whether you want the online experience
or in person it's just to be up to you i strongly recommend if you can to go in person it's less stressful if you have
the opportunity if you can't you can do it from home it's up to you which one you want to choose they're more or less
the same i like pearson vue but some people like psi so
it just depends if you take a couple you'll decide which one you like and if you're wondering what the word proctor
means it means there's a supervisor somebody is monitoring you as you take the exam to make sure
you're not cheating that's the whole idea behind it to make sure these are legitimate uh scores so what does it
take to pass the exam you're going to have to watch video lectures memorize key information you have to do hands-on labs
and follow alongs within your own account i would recommend paid practice exams to simulate the real exam i'm going to help
you out by giving your first exam for free no credit card required just sign up on exam pro and um you know you can
go get access to that and other free additional content that i strongly recommend that you do
for the content outline we got five domains each domain has its own weighting that determines how many
questions in a domain will show up and azure is interesting because they do a range of questions so it's not a
guaranteed of a certain amount of questions on exam it's going to be between ranges so the first developer or
sorry develop azure compute solutions develop azure storage implement azure security monitor troubleshoot and
optimize azure solutions connect and consume to azure services and third-party services in terms of the
grading you've got to get about 70 percent to pass and they use scaled scoring so
uh you know it's not always exactly on the dot but for the most part if you get 70 percent you should pass in terms
of the amount of questions it's between 40 and 60 you'll probably see 55 questions so you can afford to get about 12 to 18
wrong some questions are worth more than one point there's no penalty for wrong questions some questions cannot be
skipped and for the formatted question you've got multiple choice multiple answer drag and drop hot area case
studies all sorts of kinds of questions that you'll encounter much more difficult than the az 900 for sure
in terms of the duration you get three hours so that's about one minute per question but of course different
question formats are going to be different you have 180 minutes for the exam time your seat time is 210 minutes so
you have about 30 minutes extra in terms of the whole time that you have scheduled so even if you have 180 minutes you have
to consider the entire time to get logged in and all that other stuff so time to review instructions reading
accept the nda complete the exam provide feedback at the end and i'm telling you if you're taking this online
show up early because so often you're fiddling around with your license to try to get to scan properly and then they
don't like it so you have to scan it twice so you know if you can show up an hour early and make sure you block that
time this exam is valid for i believe two years before recertification so you know
there you go [Music] hey this is andrew brown from exam pro
and before we dive into azure functions we need to understand what is serverless so serverless architecture generally
describes fully managed cloud services and i say generally because um you know that definition of serverless can be
highly contested about what can be serverless and what can not be serverless so it's not a boolean answer
of yes or no but some services are more uh serverless than others to a degree and so you know the way to help you
understand serverless is the way i define it is i look at the following characteristics one thing for a
cloud service to be serverless is that it should be highly elastic and scalable highly available highly durable and
secure by default another thing is that it abstracts away the underlying infrastructure and are built based on
the execution of your business task so you're not really worried about how many cpus and things like that maybe they are
abstracted into like virtual cpus or away from what the original
hardware software is using the idea is that it's a simpler value then the idea is that serverless can scale to zero
meaning that when it's not in use you're not paying for anything and the most important thing is you're paying for
value so you do not pay for idle servers and just to kind of reinforce that idea that um
that uh it's a degree uh my friend daniel who's really big in the service loves to describe it as like the energy
rating system so the idea is that uh when you go to buy an appliance such as um you know a washer or dryer they'll
tell you how energy efficient it is and that's kind of the idea behind serverless services and uh you know
we're going to be talking about function as a service but function of service does not necessarily guarantee that it's
a serverless service but we'll talk about that in the next slide okay [Music]
all right let's talk about function as a service also known as faas so here's kind of a diagram that i like to use to
visualize it but we'll get to it in a moment here so what is function as a service it allows developers to focus on
just writing pieces of codes also known as functions and it has event driven integration trigger functions based on
event data or to emit event data so it's not just a matter of having a piece of function or code that you write in a box
but the fact that it has to be event driven generally multiple functions are
orchestrated together to create a serverless application sometimes also known as microservices functions
generally only run when needed and so function as a service is not serverless on its own
so faas function as a service it's only serverless if it's fully managed and it scales to
zero just to take a closer look at this graphic here so the idea is that if we're talking about the underlying
infrastructure it's of course running on some kind of physical server and from there you'll have a host
operating system and then you it could be a hypervisor and then from there you could have a
virtual machine running a container runtime like the docker daemon um or you know it could be
there are some os's that are optimized to run a container runtime without a hypervisor
but the important part is the idea is that you're just deploying these little pieces of functions and they generally
will go into a container so you might have a container runtime
that is specifically configured for ruby or it might be configured for java or python or net and then the idea is
that you're dropping that code and that code's getting scheduled into that container
but hopefully that gives you kind of the general idea of what function as a service is
[Music] hey this is andrew brown from exam pro and we are taking a look at azure
functions and this is a function as a service faas offering that allows developers to focus on writing code and
not worry about maintaining the underlying computing infrastructure and so here is our
visualization so we can kind of break down the anatomy of how azure functions work so the first thing you'll need is
a function app and this defines the underlying compute for a collection of functions so a function app defines the
hosting runtime and global configurations then you have the functions themselves these represent
code along with application runtime configuration so the idea is you can say i want this to be a python function a
net function etc etc there's always going to be a trigger so a trigger is the chosen event
data that will cause the function to execute and you can only have one trigger
you have input bindings these are one or more multiple data sources that will be passed to the function when a trigger
occurs so the idea is you can pull in data from a variety of different azure services at the time of trigger which is
uh quite nice and then you have the output bindings these are one or more data sinks that
will receive outputted data from the function on successful execution they say sinks you could say consumers if you
like and also azure functions at least as of today has four different versions you
really do want to just use the latest version there could be a version 5 out by now i
don't know but you know just be aware that you're always using the latest version but
in practicality you probably won't be able to tell the difference between the versions but just make sure you're using
the latest okay [Music] all right let's take a look at storage
considerations for azure functions because every function app requires a storage account to operate and if that
account is deleted your functions are just not going to work so kind of a small visualization of your function app
linked to a storage account so azure functions uses the following storage types in the storage account it's going
to vary based on use case so for blob storage it maintains binding state and function keys for azure files file share
used to store and run function app code in a consumption plan or premium plan azure files is set up by default but you
can create an app without azure files under certain conditions for queue storage this is used by task hubs in durable
functions we have a little section on durable functions and then for table storage used by task hubs and durable
functions as well so there you go [Music] okay let's take a look here at the
anatomy of a single azure function so here is a screenshot of visual studio code because this is where you're going
to be writing your functions you can write them somewhere else i sure don't know where but
they have really strong integrations with visual studio code but the idea here is that we have a
visual studio code plug-in installed that allows us to manage our remote functions and we have
local projects but let's take a look at some of the files that are in here so we can understand what kind of files matter
to our functions the first is functions.json this is configuration of a single function defining bindings and
we talk a lot about bindings uh in the azure functions section of this course there's the code itself in this case
it's a javascript file we have the func ignore that's just like a git ignore file but it's to ignore files like files
to be not packaged right so like when it actually deploys the function you don't want those files included but
you might use them in local development we have host.json this is global configuration of all functions at the
function app level then there's the local project itself this is just where the function is
stored locally and a lot of times you're moving that code to uh to the remote storage on azure and if we were just to
open that up you can kind of see the same files being mirrored so we have the name of the folder so host json function
json index etc local settings json package json things like that
so i'll just kind of erase a little bit of this out of here but you get the idea and it'll make a lot more sense when we
start making functions which we absolutely do in this course critical for the az204 so
but we'll do that soon [Music]
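The host.json mentioned above holds function-app-wide settings that apply to every function in the app. A small illustrative example (the specific values shown are just samples):

```json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": { "isEnabled": true }
    }
  },
  "extensions": {
    "queues": { "batchSize": 16 }
  }
}
```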
all right let's take a look at authorization levels this determines what key if any needs to be present when
making a request when a function is getting invoked and the authorization levels can be one of the following we
have anonymous function and admin so anonymous means that anybody can invoke the function so if it's an
https request in order to trigger it anybody can trigger it
for function you'll need a specified api key this is the default value when you're creating your functions generally
recommended as the means to use or you can have admin so that the master key
has to be required and for configuration if you're trying to
figure out how to change the authorization levels after you've deployed a function you're just
going to click on your https trigger and from there you can switch the authorization level now that's not
always the case there's some cases where you're not able to change it via the portal
um and that just has to do with you know the type of runtime you're running whether it's a custom run time
but yeah that's where you're going to do it there and you can change it after the fact after creation okay
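To make the three levels concrete, here is a stand-alone Python sketch of the decision logic. A real function app manages and checks keys for you (presented via the `x-functions-key` header or `code` query parameter); the key values and names below are made up for illustration.

```python
import hmac

FUNCTION_KEYS = {"func-key-123"}   # per-function api keys (illustrative)
MASTER_KEYS = {"master-key-999"}   # admin/master keys (illustrative)

def is_authorized(level, presented_key):
    """Return True if presented_key satisfies the given authorization level."""
    if level == "anonymous":
        return True  # anyone can invoke, no key needed
    accepted = set()
    if level in ("function", "admin"):
        accepted |= MASTER_KEYS      # the master key always works
    if level == "function":
        accepted |= FUNCTION_KEYS    # function keys also work at this level
    # constant-time comparison to avoid leaking key contents via timing
    return any(hmac.compare_digest(presented_key or "", k) for k in accepted)
```

Note the asymmetry: a master key satisfies the function level, but a function key does not satisfy the admin level.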
[Music] let's talk about how we would debug our azure function well you have this
ability to turn on uh streaming logs and this allows you to see near real-time logging when errors occur and it's not
as simple as that because there are two options for streaming a file log the first is built-in logging stream
and so the idea is that you have the app service platform lets you view a stream of your application log files and then
there's live metrics stream so when your function app is connected to application insights you can view log data and other
metrics in near real time in the azure portal using live metrics stream and log streams can be viewed both in the portal
and in most cases the local developer environments such as visual studio code now when i
was doing this i was trying to turn it on and i did not get any logs in near real time but i think the reason why was
because the hosting that i had it backed on wasn't on app service platform so i think it really does matter where you
host it um and so you know maybe we'll explore that in the follow along to see if we can see your real-time logs
but yeah there you go [Music] all right let's take a look at the key
concepts for azure functions these are not exactly my words but i will uh provide exceptions here as we describe
this stuff but the idea here is that we want to get broader knowledge about azure functions so azure functions are
lightweight and can be serverless and they can be and this is only going to be dependent on how you host them and we'll
talk about that in a moment azure functions are usually easier to write and deploy than
full web applications uh that can be true the idea though is that you are uh you might have to
do less coding in some regards but then you might have to do more in other places
like application integration so i would say that it shifts the responsibilities of what you have to
work on but it is better i think azure functions are fast to execute because there is no large application startup
time initialization or other events fired before the code is executed now it's true that with large applications there are
those downsides to them but the the trade-off here is that if you're using serverless functions then you will be
dealing with cold starts um and so cold starts just means that uh you know
when you're when you have a function that hasn't been used in a while that's going to have to boot up an environment
so you know there is that trade-off now in many cases you can
pre-warm or have continuously running compute but if you do that then you're not
necessarily leveraging serverless like to be able to scale to zero but you know you do get trade-offs azure functions
execution is triggered when an event is fired that is true azure functions do not need any infrastructure and it has
zero maintenance well they do have infrastructure it's just that you don't generally have to worry about the
infrastructure in terms of azure's implementation of functions you do have to think a little
bit more in general about the infrastructure underneath as opposed to google cloud or aws but at least you
have a lot of options on azure and it has zero maintenance well the infrastructure that's true but um
the your code itself you know if you're using python 2 and microsoft says hey we're going to expire or uh deprecate
the python 2 runtime you have to use python 3 well then you're going to have to
upgrade that stuff so that is your maintenance so there is management code but
that's no different than any other provider azure functions can be built tested deployed in the azure portal
using a browser only if you're using windows so uh if you are hosting on linux there are definitely
lots of limitations in the azure portal if you're using windows then there's no problem in most cases
you should just use windows because you're not going to be able to tell the difference
in most cases but if you want to take the full advantage of being able to work in the
portal absolutely use windows azure functions are easy to upgrade and doesn't affect
other parts of the website again this is subjective based on uh the code that you have to upgrade right so
or functionality that they change so you know generally it's true it can be easier in isolate to update parts of
your application because they're all little functions but there are exceptions to that azure functions use
industry standard protocols probably https here and communicate with other apis databases and libraries that's true
you only pay while your functions are running that is true if you use the serverless model if you're continuously
running virtual machines behind the scenes then you are going to be paying all the time azure functions
automatically scale to meet the demand of traffic that is true underneath azure will provision more service containers
that is true azure functions scale to zero cost can depend on your hosting azure functions have built-in
monitoring via azure monitor which it absolutely does and it can integrate or has built-in ci cd via azure devops
which is very nice azure functions are event driven and will be triggered based on event data and emit event data
absolutely and if they didn't then it wouldn't be functioned as a service but there you go
[Music] all right let's take a look here at use cases both business and technical for
azure functions so for business use cases you can use them for scheduled tasks reminders and notifications
lightweight web apis sending background emails running background backup tasks doing back-end calculations there are
technical use cases such as sending emails starting a backup order processing
task scheduling database cleanup send notifications messages iot data processing so azure functions are best
suited for smaller apps that have events that can work independently of other websites originally i had some text in
here that said like azure functions were not suited for large applications but more of these menial
jobs around a larger application which is not true you can build a whole application out of functions
now whether that's easy to do with azure functions is another story because it has to do with cold starts and things
like that but i i definitely think you can but this is just kind of give you creative ideas of
how you can use azure functions outside of the normal use case of just building a web application so there you go
All right, let's take a look at the VS Code extension, because this is going to be the primary way that you are going to be working with Azure Functions. In order to work productively with Azure Functions, you'll need the Visual Studio Code Azure Functions extension. What you'll do is install that, and once it's installed and you have the Azure icon in your activity bar with the sidebar drawer, you can look for Functions. I'm going to get my pen tool out here; the idea is you install this, and now you have, at the top here, the Functions tab, so you can manage Azure Functions. It's possible to use the Azure portal to create and update Windows-hosted Azure Functions, but in practice it's not an easy experience, and with Linux it's impossible. This is a very different experience compared to something like AWS or Google Cloud, where you can do basically everything in the cloud, but with Azure Functions you really, really, really rely on Visual Studio Code. You can also use Visual Studio for .NET and C# function apps; I've never done it myself, but it is an option out there. So there you go.
Let's talk about runtimes. What is an application runtime? An application runtime is a compute environment configured with the necessary software and system configuration to run a specific type of application code. Here, if we were to create an Azure function, we have some options, and you can see the runtime stack: Azure provides multiple runtimes for popular programming languages. It has .NET, so that's C#, Java, Node.js (that's both JavaScript and TypeScript), PowerShell Core, and Python. Unfortunately it does not have Ruby at the time of recording this; I really would like them to have it. But if you want something that's not there, what you can do is implement your own custom handler. I believe they have an example for either Rust or Go in the documentation, or let's say you want to do Ruby. Custom handlers are something that's on the AZ-204 exam, but in practice it's really, really, really hard to get a working custom handler, and I say that because I tried to make a Ruby one and it didn't work. I tried their tutorial, it didn't work. I reached out to the person that wrote the tutorial, and they couldn't get it to work. I went to support, and they couldn't get it to work. So in theory custom handlers sound great, if you can figure them out. Follow along to the custom handlers section and you'll see why. But yeah, that is an option on the table. The runtimes provided by Microsoft are just Docker containers; you can see them on Docker Hub. A lot of times, containers and functions are strongly related, because functions are generally, almost always, running on containers. Okay.

All right, let's take a look at Windows
versus Linux hosting. When you create an Azure Function app, you can choose either Windows or Linux as your host, and this actually makes a big difference in the capabilities that are available to you. We're talking about things like performance differences, functionality limitations, and feature differences. Generally, when we're using Microsoft, it's always good to attempt to use Windows when possible, because their whole ecosystem is built around Windows and not so much Linux. One example here is that if you are using Linux hosting for Azure Function apps, you cannot edit the function in the portal after it's been deployed, and that is a very major inconvenience, so just consider that when deploying functions. I usually can't tell much of a difference between Linux and Windows, so I'll just choose Windows to get that additional functionality. If you need to determine the exact OS that is being used, what you can do is go to the Docker Hub repository, because again, all these images, or runtimes, are stored on Docker Hub, and there we can see the Windows images and the Linux images. Notice that for Linux there is a lot more variation, where Windows is just Nano Server 1809, that's Windows Server 1809, whereas for Linux there are a few more options, mostly Debian, but you do have one down here that's Ubuntu. But there you go.

All right, let's take a look at templates
for Azure Functions. Azure provides function templates to get you started with common functionality scenarios; this comes up when you first create your function. You don't have to use one, but this is definitely the route you should want to go, especially if you're choosing the HTTP trigger, which is very common. You choose the template in Visual Studio Code, not in the portal. Let's just walk through the templates we have available:
- HTTP: triggered by an HTTP request, and returns an HTTP response
- Timer: triggered based on a schedule
- Blob Storage: triggered when files are uploaded or updated in a Blob Storage container
- Cosmos DB: triggered when processing a new or modified Cosmos DB document
- Queue Storage: triggered by Azure Storage queue messages
- Event Grid: triggered by an event from Event Grid. Many Azure services can trigger a function through Event Grid; as we cover elsewhere in this course, a lot of sources can be ingested into Event Grid, which is a serverless event bus integrated with Azure services.
- Event Hub: triggered by an Event Hub event; this is for streaming
- Service Bus Queue: triggered by a Service Bus queue message; this is messaging
- Service Bus Topic: triggered by an event from a Service Bus topic; this is a pub/sub model
- SendGrid: triggered by an email event in a third-party service. If you've never heard of SendGrid, it's just for sending out emails.

So there's a lot there. Does the template set up the bindings for you? Probably; I never really noticed or checked, but I would think that if you were to set one up for Blob Storage, you would get the bindings set up for you in your function.json. We will actually look next at the function.json configuration, because that is what is next in the course.
For every single function there is a function configuration file called function.json, and there's the structure there. It defines the function's triggers, bindings, and other configuration settings. When you choose a template, you're going to get some defaults in there. Let's talk about three main attributes under bindings. You have the type of the binding, which identifies the kind of binding and affects which other attributes appear underneath. You'll often have a direction, like in and out. And then there's the name, and the name is going to vary: for C# it's an argument name, for JavaScript it's the key in a key/value list, but it's what is used to bind data in the function. We will definitely cover a lot more about bindings, because bindings are super important, but just so you know, the function configuration file, again, is for triggers, bindings, and other configuration settings.
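To make that concrete, here is a hand-written sketch of the kind of function.json an HTTP template generates; the exact attribute values (authLevel, methods) are placeholders you would adjust:

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function",
      "methods": ["get", "post"]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```

Notice the three attributes just discussed on each binding: type, direction, and name.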
Another file that you'll find in your function app is host.json, and this is for configuring global options that affect all functions within the function app. Here is an example one: under the extensions section for HTTP, you can see it has a route prefix, things like that, and it's setting a custom header. There is a lot of stuff you can configure here, with options for aggregators, Application Insights, blobs, console, Cosmos DB, extensions, and then HTTP, queues, retry, SendGrid, a whole host of things. It's not super important for the exam, but definitely in practice you will be coming back to this file and configuring things as need be. But there you go.
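As a sketch of what that looks like, here is a small host.json along the lines of the example just described; the route prefix and custom header values are illustrative, not required settings:

```json
{
  "version": "2.0",
  "extensions": {
    "http": {
      "routePrefix": "api",
      "customHeaders": {
        "X-Content-Type-Options": "nosniff"
      }
    }
  },
  "logging": {
    "applicationInsights": {
      "samplingSettings": { "isEnabled": true }
    }
  }
}
```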
Let's take a look at hosting plans for Azure Functions, and we have three to choose from: Consumption, Functions Premium, and App Service Plan. Very different, very interesting options for deciding where and how you want your compute to run for functions; you do not see AWS or Google Cloud doing that, and the flexibility here is very interesting.

The first is the Consumption plan, and this would be considered serverless. Because it's serverless, it can scale to zero, and because it can scale to zero, that means that generally we will have cold starts. You only pay for the time your code or application is running: billing is based on the number of executions, the duration of each execution, and the amount of memory used. You just pay while you have functions running, and it scales automatically, even during periods of high load.

Then you have the Premium plan, Functions Premium. This is where we have pre-warmed compute underneath: the user designates a set of pre-warmed instances which are already online and ready to respond instantly, and Azure provides any additional compute that is required while your function is running. You pay for the constantly pre-warmed instances, including any additional instances needed to scale the app in and out; Azure Functions host instances are added and removed based on the number of incoming events. The idea there is that you're not waiting for the container runtime to spin up, because it's already running, but it's not going to scale to zero. You can still say that it's serverless in some regard; I would consider that a fully managed service but not necessarily serverless, but that's debatable.

The last here, we have the Dedicated plan. This is with an App Service Plan, and this is where we're doing VM sharing, and this one is extremely unique; you do not see this in any other cloud provider. When you use App Service for other apps, your functions will run on the same plan's virtual machines at no extra cost. You may scale out manually by adding more virtual machine instances to the App Service Plan, and you may also optionally have autoscale enabled. It's for when you have existing, underutilized virtual machines which also run other instances on the App Service Plan. I think what happened was, people were using App Service, right, platform as a service, and they just weren't utilizing all of the virtual machine compute, and so a customer must have said, hey, can I just take my functions and use the unutilized compute on my plan? And Azure was like, sure, and they made that a thing. So it's just an interesting way to save money, but it is one of those things that's a little bit hard to understand, because it's not as isolated or modular in terms of functionality as what we see with other cloud service providers. But those are your three plans. So there you go.

All right, let's take a look at triggers and bindings for Azure Functions. We
talked about them before, but let's give them a little bit more attention. Triggers and bindings let you avoid hard-coding access to other services, abstracting away boilerplate code and keeping your functions lean, because the idea is that you won't have to add that plumbing code into your functions. Here is our graphic, which represents a function: we have input bindings, output bindings, and we have a trigger there. So what is a trigger? A trigger is a specific type of event which causes the function to run. It defines how a function is invoked, and a function must have exactly one trigger. Triggers can have associated data, which is often provided as the payload of the function. And as we saw in the introduction, we can have multiple sources of input that get added there: in this example we have HTTP, but we're bringing in data from Blob Storage as well, and obviously the main trigger, HTTP, is an input too, so you can have additional inputs at the same time. Then, what is a binding? My arrow's a little bit messed up, but the idea is that a binding defines how your function is connected to another service, so we have input bindings and output bindings. The data from bindings is provided to the function as parameters. Bindings are optional, and a function can have multiple input and output bindings; optional, but often used, as we'll find out. Let's take a look at the supported bindings in the next video.
All right, now let's take a look at the supported bindings available for Azure Functions. There are quite a few here, and we'll quickly go through them just so you have an idea of the scope. We have Blob Storage; Azure Cosmos DB, which I keep wanting to call "Cosmo DB," I always want to add that s in there; Azure SQL; Dapr, which is a runtime we cover in this course, a runtime for distributed applications, for microservices; Event Grid; Event Hub; HTTP and webhooks, which is going to be the most common one you'll use; IoT Hub; Kafka; Mobile Apps; Notification Hubs, that's for push notifications; Queue Storage, messaging; RabbitMQ, that is an open-source queuing system; SendGrid, which is for sending emails; Service Bus, which is both queuing and pub/sub; SignalR, which we don't really talk about much in this course, but it's an open-source .NET library to send asynchronous notifications to client-side web applications; Table Storage; Timer; and Twilio. What's also important to note, and sorry, I could not make this graphic better, I really did try to present this better, but there are just so many here: some don't support triggers. So Azure SQL does not support triggers, Mobile Apps and Notification Hubs don't, and Table Storage and Twilio do not. Then you can see we have a big gap of inputs for a bunch here and a bunch there. It's not super important to know for the exam, but the fact is that you don't get triggers, inputs, and outputs across all services, and there are some cases where the runtime version matters: if you are using version 1.x, some things are not supported, and in some cases even in version 2.x; Mobile Apps, for example, don't have support in later versions, so in some cases you do have to use version 1.x functions, but in most cases you'll be using version 4.x. Hopefully that gives you a broad idea of the supported bindings, but we're not done with bindings, we'll be looking at more of them. Okay.
Let us talk about binding directions. All triggers and bindings have a direction property in the function.json file. The direction of a trigger is always in, input and output bindings are either in or out, and some bindings support a special direction called inout, which is a little bit confusing, but it's nice to have the extra option. The idea is that we have a direction: triggers, again, are always in, and then you have in, out, and inout. The trigger is defined alongside the input and output bindings, and a trigger has the same type as the corresponding input binding, but with "Trigger" appended. So for example, if we had blob, that would be our input binding; then we'd have blobTrigger as the trigger, and that's how you would know the difference. So again, just reiterating: see where it says type here? If it was just http, then it would just be a binding, but if it's httpTrigger, then it's a trigger, which is also an in binding. If you use inout, only the advanced editor is available via the Integrate tab in the portal, just so you know, because there's a visualization of integrations that we see in the Azure portal, and they're talking about that inout option there. Okay.
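To see directions side by side, here is a hand-written function.json sketch with a timer trigger (direction in), a blob input binding (in), and a Cosmos DB output binding (out); the schedule, paths, database names, and connection-setting names are all placeholders:

```json
{
  "bindings": [
    {
      "name": "mytimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 * * * *"
    },
    {
      "name": "inputblob",
      "type": "blob",
      "direction": "in",
      "path": "logs/latest.log",
      "connection": "AzureWebJobsStorage"
    },
    {
      "name": "outputdoc",
      "type": "cosmosDB",
      "direction": "out",
      "databaseName": "appdb",
      "collectionName": "logs",
      "connectionStringSetting": "CosmosDbConnection"
    }
  ]
}
```

Note how the timerTrigger follows the type-plus-"Trigger" naming convention, while the plain blob and cosmosDB entries are ordinary bindings.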
Okay, so now what I want to do is just go through a few scenarios of triggers and bindings, to help cement the idea of their utility.

Our first scenario: every hour you want to read a new log file delivered by your application, and you'll need to transform the data to be ingested into your NoSQL or SQL database that resides in Cosmos DB. Here is our visualization: we have our function, which is Python for fun; it's going to output to Cosmos DB, so we need a binding out to there; it has a trigger; and we've got the Blob Storage. Actually, the HTTP icon shouldn't even be there; I think that's just a mistake. You use a trigger of type timer, because it's a scheduled job that will run at a specified time. The bindings will be in for Blob Storage and out for Cosmos DB: the function runs on a timer, and when it executes, it reads data from Blob Storage, processes that data, and then writes some data into Cosmos DB. The key thing here is that this runs on a schedule; even though the HTTP icon is there, you're not invoking it with an API request, it is invoked on a schedule.

Let's take a look at scenario two: every time someone signs up to your application, you want to trigger an email. Here there is an HTTP request coming in, and it's going to go out to SendGrid to send that email. You want to develop an API that sends an email after a request is received, so you'll use an HTTP trigger, because it's an API that will be triggered based on the request. For the bindings, you won't be accessing any data when the function starts, so there's no in binding, and you'll use SendGrid for the out direction, which allows you to send messages via email.

Looking at our third scenario, consider a scenario in which you're using a queue service and you want a function to process a message in one queue and enter a new message in another. The idea is you have a queue, it's triggering the function, and the function is going to output to something. This icon is probably old; we should really have the queue icon repeated twice, because this one is the SendGrid logo, so just imagine that we took this icon and put it over here; very sorry for that graphical error. In this case you want a trigger of type queue and a binding of type queue with the direction out, because you're not accessing the queue in the in direction: you read the data from one queue via the trigger, process it to create a new message, and then write to a different queue, which may or may not be connected to another service or function. So there you go.

All right, let's take a look here at
binding expressions. In the function.json file and in your function code's parameters, you can use binding expressions that resolve to values from various sources. Most expressions are identified by wrapping them in curly braces; I say most because there's an exception where you do not use curly braces. The idea is that it allows you to have dynamic content within your function.json file, and there are a variety of different binding expressions: app settings, trigger file name, trigger metadata, JSON payloads, GUIDs, and the current date and time. Let's go take a look at what those look like.

The first is app settings. When you want to change configuration based on the environment, you're going to use percentage signs; this is the only case where you use percentage signs instead of curly braces, which is confusing because it's the first example. So notice that there are percentage signs there.

Then for trigger file name: this can be used to reference the file name in a path, and it works for both the in and out directions. Here we have the curly braces for the file name.

For trigger metadata: many triggers provide additional metadata values. These values can be used as input parameters for C# and F#, or as properties on the context bindings object in JavaScript. For example, if we're doing Azure Queue Storage, the queue trigger supports properties like QueueTrigger, DequeueCount, Id, etc. And then you'll notice they are available here; here's the queueTrigger, typed in camel case.

For JSON payloads, when a trigger payload is JSON, meaning the data being passed is JSON, you can refer to its properties in the configuration of other bindings in the same function, and in the code. So if the payload had a BlobName property set to helloworld.txt, you could reference BlobName and that value would be put here. And if some of the properties in your JSON payload are objects, you can use dot notation; that's just a common thing for JSON and JavaScript.

If you want a globally unique identifier, you can just use rand-guid, and you'll get something that looks like that. If you want the current date and time, you use DateTime, and you're going to get the current date and time in this format. So there you go.
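Pulling a few of those together, here is a sketch of a function.json that uses an app-setting expression (percent signs), a trigger file-name expression, and rand-guid; the container names and the StorageConnectionAppSetting name are made up for illustration:

```json
{
  "bindings": [
    {
      "name": "uploaded",
      "type": "blobTrigger",
      "direction": "in",
      "path": "uploads/{filename}",
      "connection": "%StorageConnectionAppSetting%"
    },
    {
      "name": "archived",
      "type": "blob",
      "direction": "out",
      "path": "archive/{filename}-{rand-guid}.bak",
      "connection": "%StorageConnectionAppSetting%"
    }
  ]
}
```

Here {filename} is captured from the triggering blob's path and reused in the output path, while the connection value is resolved from an app setting at runtime.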
Let's take a look at the local settings file; this is a file we saw when we were looking at the anatomy of the local files. The local settings file stores app settings and settings used by your local development tools. This file is called local.settings.json and is expected to be at the root of your project folder. Because local.settings.json may contain secrets, such as connection strings, you should never store it in a remote repository, so make sure you add it to your .gitignore file. Here's an example of one. First we have IsEncrypted: when this setting is true, all values are encrypted with a local machine key. Then Values: this is a collection of application settings used when running the project locally; notice we have FUNCTIONS_WORKER_RUNTIME, AzureWebJobsStorage, MyBindingConnection, things like that. Host customizes the Functions host process when you run the project locally. And ConnectionStrings is used only by frameworks that typically get connection strings from the ConnectionStrings section. Okay.

Azure Functions Core Tools lets you develop and test your functions on your local computer from your command prompt or terminal, so it's a CLI. You type in func, and it allows you to do a whole host of things. Let's take a look at the commands. For top-level commands we have: init, which creates a new function project in a specific language; logs, which gets the logs for functions running in a Kubernetes cluster; new, which creates a new function in the current project based on a template; run, which enables you to invoke a function directly, similar to running a function using the Test tab in the Azure portal; start, which starts the local runtime host and loads the function project in the current folder; and deploy, which was replaced by func kubernetes deploy. Then we have command groups that contain their own set of subcommands: func azure, for when you're working with Azure resources; durable, for Durable Functions; extensions, for managing extensions; kubernetes, if you're working with Kubernetes and Azure Functions; settings, if you are messing with settings; and templates, for listing available templates. So there you go.
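Tying the last two sections together, here is a minimal local.settings.json of the kind Core Tools reads when you run func start locally; every value below is a placeholder:

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "MyBindingConnection": "<your-connection-string>"
  },
  "Host": {
    "LocalHttpPort": 7071,
    "CORS": "*"
  },
  "ConnectionStrings": {
    "SQLConnectionString": "<your-sql-connection-string>"
  }
}
```

Again, because the Values and ConnectionStrings sections tend to hold secrets, this file belongs in .gitignore, not in your repository.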
All right, let's talk about custom handlers. These are lightweight web servers that receive events from the Functions host, and they can be written in any language, as long as it supports HTTP primitives. These are really great in situations where you want a runtime that is not supported, for a language like Go, Rust, or Ruby, or a runtime environment for a specific technology where you need a bunch of libraries preloaded, like Ruby on Rails or Deno. Within your function app's host.json, the idea is you'll have a customHandler section, and this basically defines the execution path, because it is basically running a program. With custom handlers you can use triggers and input and output bindings via extension bundles.

Okay, so let's visualize that, so we really understand that this is a separate thing that's running. The Functions host will pass along the payload to the custom handler; the communication between the host and the custom handler is via HTTP requests and responses; and the custom handler runs as a lightweight web server. The underlying compute will likely vary based on which service plan is chosen, so it's going to be in other places; I couldn't really figure out where. The custom executable for the custom handler is bundled along with your function code.

Some things about the application structure: to implement a custom handler, your application must have the following: a host.json file, a local.settings.json file, a function.json file for each function, and a command, script, or executable which will run as the web server. The following diagram shows how these files look on the file system for a function named MyQ and a custom handler. You can see, it's not the prettiest, but it gets the point across: the idea is you have a folder, here's your function.json, and this one happens to use an executable.

In practice, custom handlers are super, super hard to do, and I think I mentioned this before, but I was trying to set up a Ruby one and it didn't work, and I tried to set up the tutorial one that's in the docs, for Rust or Go, and that didn't work. I reached out to the person that made it; that didn't work. I reached out to Azure support; they didn't know how to do it. So if anyone out there is running custom handlers and figures it out, please tell me; hopefully I do figure it out and we get it included in this course. If we don't, just understand: if I couldn't do it, it's not a big deal if you don't know how to do it. But for me, I personally would have loved to run Ruby, because that's my favorite language to use, and that's what I would use on Azure Functions.
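For reference, the customHandler section described above lives in host.json and looks roughly like this; "handler" here stands in for whatever executable name you actually build:

```json
{
  "version": "2.0",
  "customHandler": {
    "description": {
      "defaultExecutablePath": "handler",
      "workingDirectory": "",
      "arguments": []
    },
    "enableForwardingHttpRequest": true
  }
}
```

The Functions host launches that executable and forwards each invocation to it as an HTTP request, which is why the handler has to speak HTTP primitives.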
Okay, so let's say you are making a custom handler, you deploy it, and you end up having an error that says the runtime is unreachable. What do you do? Well, not a whole lot, but I can tell you there are some things that help you narrow it down, because the runtime could fail to start for multiple reasons. There are some common reasons that Azure suggests: the storage account was deleted; the storage account application setting was deleted; the storage account credentials are invalid; the storage account is inaccessible; the daily execution quota is full; or the app is behind a firewall. So there are your possible reasons for things to go wrong. I just wanted to share that, because I ran into this and I didn't necessarily get through it, but I wanted to share the possible causes with you. Okay.

Hey, this is Andrew Brown from ExamPro,
and we are taking a look at Durable Functions. Durable Functions is a serverless compute extension of Azure Functions that allows you to write stateful functions. The extension introduces two new types of functions: orchestrator functions, which define stateful workflows, implicitly representing state via control flow; and entity functions, which manage the state of an entity, explicitly representing state. You define workflows in code, so no JSON schemas or designers are needed; though honestly, I would like one, because other providers like AWS have no-code solutions, so it's kind of weird that they try to make it sound like an advantage, when it isn't really one. Orchestrators can call other functions synchronously and asynchronously, and the output from the called functions can be saved to local variables. They automatically checkpoint their progress whenever the function awaits, so local state is never lost if the process recycles or the VM reboots. Durable Functions currently supports the following languages: C#, JavaScript, Python, F#, and PowerShell (including PowerShell 7). To use Durable Functions, you need to install a library specific to your language in the root of your function app project, such as npm install durable-functions. So there you go.
All right, so if we are going to be having stateful functions, we are going to need a variety of different patterns for setting up different kinds of serverless workloads. Let's take a look at them.

The first is function chaining. This is the pattern of executing a sequence of functions in a specific order, where often the output of one function needs to be applied to the input of another function. Durable Functions allows us to implement this pattern concisely in code.

Then you have fan-out/fan-in. This is the pattern of executing multiple functions in parallel and then waiting for them all to finish. Fanning out can be done with normal functions by having a function send multiple messages to a queue; fanning in is much more difficult, because we have to write code to track when the queue-triggered functions end and to store each function's output. The Durable Functions extension handles this pattern with relatively simple code.

We have async HTTP APIs. This pattern addresses the problem of coordinating the state of long-running operations with external clients. A common way to implement it is to have an HTTP call trigger the long-running action, then redirect the client to a status endpoint, which they can poll to learn when the operation is complete. Durable Functions provides built-in APIs that simplify the code we write for interacting with long-running function executions.

The monitor pattern refers to a flexible, recurring process in a workflow, such as polling until certain conditions are met. A simple scenario, such as a periodic cleanup job, can be addressed with a regular timer trigger, but its interval is fixed, making managing instance lifetimes difficult. Durable Functions enables flexible recurrence intervals, task lifetime management, and the ability to create multiple monitor processes from a single orchestration.

A little bit more here: we have human interaction. Many automated processes involve some form of human interaction, but humans are not as available and responsive as cloud services, which makes involving humans in an automated process tricky. The automated process must allow for this, and they often do so by using timeouts and compensation logic.

Then we have aggregators, or stateful entities. This pattern is about aggregating event data over a period of time into a single, addressable entity. The data being aggregated may come from multiple sources, be delivered in batches, or be scattered over long periods of time. The aggregator may need to act on event data as it arrives, and external clients may need to query the aggregated data. So there you go.
Let us take a look at Azure Functions on Kubernetes. Now, I don't think this is on the exam, but I just thought it was a very interesting thing that Azure Functions can do, so I wanted to run through it quickly. You can deploy any Azure Function app to a Kubernetes cluster running KEDA. If you don't know KEDA, it's Kubernetes-based Event Driven Autoscaling, and it allows you to set up autoscaling based on events from various cloud-native and third-party services; in this case, that could be Azure Function apps. When we were looking at the Core Tools, they had the option to deploy to Kubernetes, so I'm going to assume that you'd probably be running on AKS, Azure Kubernetes Service. Let's just read through some of the things here. Kubernetes-based Functions provides the Functions runtime in a Docker container with event-driven scaling through KEDA. KEDA can scale in to zero instances when no events are occurring, and out to n instances; so it scales to zero, and it does this by exposing custom metrics to the Kubernetes autoscaler, the Horizontal Pod Autoscaler. Using function containers with KEDA makes it possible to replicate serverless function capabilities in a Kubernetes cluster, and these functions can also be deployed using AKS virtual nodes for serverless infrastructure. I'm really curious how they're actually running, and other details there, but that's outside the scope of this course. If you're into Kubernetes, and I do have a Kubernetes course, it's just interesting to see that that functionality is there, and I wanted to call it out for our Kubernetes fans. Okay.
hey this is andrew brown from exam pro and we are looking at azure virtual machines which makes it easy for you to
choose your own os compute memory and storage and launch a server within minutes
all right so let's take a quick look here at vms for azure and this is a highly configurable server that relies
on virtualization meaning that you're running a server without having to actually buy and maintain physical
hardware that runs it you will still have to manage things at the software layer so you would have to apply your own operating system patches and install and configure packages but the nice thing is that you're not dealing with that hardware it's just going to work for you and some things i want you to know about azure virtual machines is
that the size of the virtual machine is tied to the azure image you select and the size defines the combination of vcpus memory and storage capacity the current limit on a per subscription basis is 20 vms per region i would think that if you wanted more you could just use another subscription i don't know if they have a service limit increase for that but that might be possible as well azure vms are billed at an hourly rate a single instance that
you launch is going to have availability of 99.9 percent when you're using premium disk if you have standard disk i
have no idea what it is um but it's going to be less than that if you want to get 99.95
availability you're going to have to launch two instances within an availability set and you can attach multiple managed disks to an azure virtual machine now just to give you a quick
visual of what's actually happening when you launch an instance because when you do launch a virtual machine there's
going to be other networking components that are going to be created or associated with you and you definitely
need to know all these components here so i have this nice diagram we're going to do a quick run through with it and i
want you to just know that when you do launch a virtual machine it actually does give you a list of all the
components that it creates this is actually very common with most azure services and sometimes what they'll do
is they'll put it within a resource group for you so that they're all grouped together which is very nice
but let's just run through these common components that you need to know such as the network security group and this is
going to be attached to your network interface it acts as a virtual firewall with rules around ports and protocols so
that is what's protecting our virtual machine you have the network interface that handles ip protocol so that's how
the virtual machine talks to the internet or other things on the network you have the virtual machine itself that's the instance that is currently running it's going to get a public ip address assigned to it so that's how people from the internet can actually access the virtual machine and then you have to launch it within a virtual network and so you're going to have that vnet you'll either choose an existing one or create one during that wizard process so there you go so now let's take a quick look at the
options for operating systems on azure vm and let's just define what an os is uh so the os is the program that manages
all other programs in a computer and the most commonly known operating systems are windows mac os and linux and when
you want to launch a virtual machine the way you determine what operating system you use
is based on the image that you choose okay and
azure has a marketplace so they have so many different kinds of operating systems you're going to have every
possible option you want and microsoft has also partnered with specific companies to make sure that they're
providing you images that are updated and optimized for the azure runtime so let's do a quick run through of these
supported or partnered os's so we have suse we have red hat ubuntu debian freebsd then you have flatcar container linux and rancheros which are for containerization bitnami is more like images that have preloaded software on them they're very popular for their bitnami wordpress image you have mesosphere
and then you have images that have docker with it so you have a lot of options open to you there and i always
forget about jenkins and jenkins is on the end there now if you want to bring your own linux version you can totally
do so all you have to do is package it into a virtual hard disk or a vhd if you've never heard of vhds these are just virtual hard disk files that you can create using hyper-v software which would be on your windows machine and just be aware that azure doesn't support vhdx which is a newer format it only uses vhd okay so there you go so let's take a quick look here at cloud-init and this is something that's not going to be on your exam but it's something you absolutely should know and
need to know because it's an industry standard and it's just something that uh you might like if i didn't show it to
you no one's gonna show it to you so let's just get to it and so cloud-init is this multi-distribution method for cross-platform cloud instance initialization and it's supported across all major public cloud providers so azure aws gcp as well as provisioning systems for private cloud infrastructure and bare metal installations so what is cloud instance initialization well this is the process of preparing an instance with configuration data for the operating system and the runtime environment and so the idea is that you're going to have cloud instances that are initialized from a disk image and instance data right so the image is whatever the vm image is but you're going to have data such as metadata user data and vendor data and i'm not going to get into all the data types i just want you to know the one you'll be working with which is user data
and so user data is a script that you want to run when an instance first boots if you've ever used aws and you launch an instance you'll see a box in the wizard that says user data and that's what this is in azure i don't think they make it as clear i don't think they call it user data but if you're programmatically doing it with arm templates when you're doing infrastructure as code that is what you're doing you're using cloud-init underneath i just wanted to make that association for you so when you see that word user data you think cloud-init and cloud-init really only works with linux distributions so it should work with all the linux distributions on azure i'm pretty sure it's not being used for windows machines but there you go
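As a sketch, a minimal cloud-init user-data (cloud-config) file looks like the one below; in Azure you would pass it as custom data, for example via the customData property in an ARM template or the --custom-data flag on az vm create. The nginx package here is just an illustrative choice:

```yaml
#cloud-config
# update the package index and install nginx on first boot
package_update: true
packages:
  - nginx
# run arbitrary commands after packages are installed
runcmd:
  - systemctl enable --now nginx
```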
azure virtual machines come in a variety of different sizes that are optimized
for specific use cases and the way azure groups these are into types so that would be like general purpose compute
optimized and sizes and sizes is kind of a weird word because i've seen in other places in
their documentation called series or sku family so you'll see me using those three terms interchangeably they
definitely confused me when i was looking at it at first but let's just go quickly through some
of them there's a lot of them so they're not going to be all of them on the list here
but we'll just quickly go through it so the first is general purpose these are balanced cpu to memory ratio testing
development small to medium databases and low to medium traffic web servers this is pretty much what you're going to
be using unless you work for a company that has a lot of money and the one you're going to see the most is called
b1 because it's super super cost effective and that's the one i'm always using when
i'm launching linux machines you'll see me using it through follow alongs here in this course the
next one here is compute optimized so that's a high cpu to memory ratio good for medium traffic web servers network appliances batch processes and app servers so this just has more cpu here the most common one is the fsv2 or the f series then you have memory optimized so high memory to cpu ratio great for relational databases handling large caches and in-memory analytics and we have a variety of skus there then you have storage optimized so these have high disk throughput and io and are ideal for big data sql and nosql databases data warehousing and large transactional databases and we just have lsv2 for those
then you have the specialized gpu ones for heavy graphics rendering video editing and model training and inferencing with deep learning available with single or multiple gpus and so we have a variety of skus there and then you have your hpc your high performance compute and these are the fastest and most powerful cpu virtual machines with optional high-throughput network interfaces all right and i just want to point out that there are more here but i'm just showing you the current ones so there are previous generation series like the basic a series that are not being shown here now when you want
to actually go see this full list here it's very easy you just launch up a virtual machine and uh you're going to
be able to choose two things you're going to choose the first the image so that's going to be
what do you want to run right so it's going to be ubuntu or windows machine and then you're going to
choose your size i believe that if you choose certain images not all sizes will be available to you because some images have to be optimized for those series and so there's that b1 series and you can see in canadian dollars it's 9.72 which is a pretty good price for me and you can explore all the costs in
azure which is really nice you don't have to go to a separate marketing website you can just open it up and you
can just sort by the lowest cost which is what i always do and they have a lot of filters there so
there you go let's quickly talk about azure compute units so azure compute units also known as acus provide a way of comparing compute so cpu performance across azure skus remember we said skus is the same thing as sizes or series so you'll just see us varying the terms on that acu is currently standardized on the small standard_a1 and you saw prior that a1 is a previous series so it's not something that you'd generally be launching but everything's based off of that one and it's given a value of 100 it's just an arbitrary number to give it a
point against other machines and all other skus then represent approximately how much faster that sku can run a
standard benchmark i don't fully understand the math behind their stuff here but i generally get the
idea here and i'll just show you a quick comparison so here we have the a1 to a4 series against the d1 to d14 series a1 is our baseline at 100 acus and it has a one to one ratio of vcpus to cores and so you can see that for d1 to d14 it says 160 points to 250 points and so if i did the math here right it's going to be basically 60 to 150 percent more performant than choosing an a1 to a4 so that's how that works so there you go
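The arithmetic behind that comparison can be sketched directly: relative speed-up over the A1 baseline is just (acu − 100) / 100, expressed as a percentage.

```shell
# baseline: standard_a1 = 100 acu
# the d1-d14 series is rated at roughly 160-250 acu
for acu in 160 250; do
  # percent faster than baseline = (acu - 100) / 100 * 100 = acu - 100
  echo "$acu acu is $((acu - 100)) percent faster than the a1 baseline"
done
```

So a 160-ACU SKU is 60 percent faster than the baseline and a 250-ACU SKU is 150 percent faster, matching the 60 to 150 percent range above.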
so here's a neat feature that azure virtual machines allows you to do and you can actually do this with a lot of
different services with azure and it's the azure mobile app and so what you'll notice is that when you're using virtual
machines they might have this little qr code and if you have the app installed what you can do is scan that code but
honestly you don't even need to scan the code if you just have the app installed connect your account you can
log in on your phone and check out the virtual machine how it's performing and you can take basic actions there i
believe as well you can access the cloud shell so it's pretty darn cool um and so i just wanted you to know about that and
if you haven't just go ahead and install it on your phone i think it's available for
um android and ios i have an android phone so i didn't check ios but um yeah go check it out
so sometimes when you're launching a virtual machine you'll have the option between generation one versus generation
two when you're choosing images and i want you to know about uh this stuff here and to understand it we need to
know a little bit about hyper-v so hyper-v is microsoft's hardware virtualization product and it lets you create and run a software version of a computer called a virtual machine and each virtual machine acts like a complete computer running an operating system and programs and so if you're still kind of wondering what hyper-v is if you've ever heard of virtualbox which is used for running vms on mac and linux this is basically the exact same offering it's just the windows offering okay and it already comes pre-installed
on windows 10. so you might have to toggle something on i can't remember it because it's been a long time since i've
had to configure it but the point is you have a windows machine you're already
ready to start using hyper-v and hyper-v has two generations of virtual machines so the first one was generation one and this will support most guest operating systems and then generation two supports most 64-bit versions of windows and more current versions of linux and freebsd operating systems and there are these two here but neither one nor two is bad they both have their own use case so it's not like you should always use two
and there's a big list online but we're not going to get into that full list but anyway so we see that hyper-v has generation one and two vms and azure has them as well but they're not one-to-one the same as the hyper-v generations so if you look up the hyper-v generation feature set you might not have all of that in azure so i wouldn't rely on that list but what i definitely want you to know between azure gen 1 and gen 2 the key difference is that gen 1 is a bios-based boot architecture and gen 2 is a uefi-based boot architecture and if you've ever built a video gaming machine you definitely know the difference between these two if you don't it's just the way the machine boots up it's the first screen you see and it's going to change what kind of configuration options you have and uefi has a lot of great features in it so for instance it has secure boot which verifies the bootloader is signed by a trusted authority and you can have larger boot volumes up to 64 terabytes and if you want to know more you'll just have to look up uefi because i'm not covering it in this course i just want you to know that's the key difference there and hyper-v vms are packaged into virtual hard disk formats so that's going to be either vhd or vhdx files okay one thing you might want to be able to
do is actually get into your virtual machine and we have a lot of different options available to us in fact we have
three ways to connect to our virtual machines we have rdp ssh and bastion and so if you don't know these things we're
going to walk through them right now the first is secure shell and this is a protocol to establish a secure
connection between a client and server this is how you can remotely connect to an
azure virtual machine via a terminal ssh happens on port 22 via tcp port 22 is something you should absolutely
remember as ssh and you will generally use an rsa key pair
to gain access you can also use username and password but most people don't do that anymore
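As a runnable sketch of that key-pair flow; the key path, username, and IP address below are placeholders:

```shell
# generate an rsa key pair; -f sets the output path,
# -N "" sets an empty passphrase (fine for a demo, not for production)
ssh-keygen -t rsa -b 2048 -f ./azure_vm_key -N "" -q

# two files are produced: azure_vm_key (the private key, stays on your
# machine) and azure_vm_key.pub (the public key, goes onto the vm)
ls azure_vm_key azure_vm_key.pub

# connect by pointing ssh -i at the PRIVATE key (placeholder user and ip):
# ssh -i ./azure_vm_key azureuser@203.0.113.10
```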
another way of getting access to your machine is by using rdp which is remote desktop protocol this is a proprietary
protocol developed by microsoft which provides a user with a graphical interface to connect to another computer
over a network connection basically what it means is that it's going to open up a window and you're going to be able to
see like another windows desktop in it and control it remotely so this is how you can remotely connect
windows server to a virtual desktop rdp happens on port 3389 via tcp and udp i think there might be some other ports
but this is the port that you need to know i just remember from aws because there
were some extra ports i had to configure but that's a different story
then you have your bastion which is your third option and so azure bastion which is actually an azure service that you can deploy lets you connect to a virtual machine using your browser and the azure portal and so the reason why this is so useful is that if you have let's say a chromebook which can't actually use an ssh client or the rdp client this just gives you a way of using the browser so you don't have to install those clients but we'll get into that shortly
here okay so let's take a closer look at ssh and
the most common way to access your vm or to authenticate yourself uh so that you're allowed to get into your vm with
ssh is by using ssh key pair and so the idea is that you're going to have to generate out two keys that's why
it's called a key pair there's two keys and so you'd use the command ssh-keygen -t rsa and that is
something that is definitely etched in my memory it's definitely something you want to memorize because it'll be with
you throughout your entire career and so you're going to have a private key and a public key and so the way these keys
work is that the private key should remain on your local system and not be shared with others and the public
key is going to be stored on the vm and when you go ahead and ssh you're going to provide your private key as part of the ssh command and it's going to be matched against the public key to authenticate you and so that is what you normally do ssh -i with your private key note that the file ending in .pub is the public key which lives on the vm it's not the one you pass to ssh and then there's the actual address of the server and that's how you're gonna gain access okay so let's take a look at the process of using rdp
so the first thing you'll want to do is download the rdp file so when you go and try to connect to that virtual machine you're going to go ahead and download the rdp file the file that has the .rdp extension you're going to double click that file and you're going to be prompted to fill in your username and password this username and password is what you specified when you created your virtual machine and i just want you to know that if you are on a mac you can install the remote desktop client from the app store but if you're on windows you already have the rdp client installed there's nothing extra that you have to do so let's take a closer look here at
azure bastion and this is an intermediate hardened instance that allows you to connect to your target server via ssh or rdp and it will provision a web-based rdp client or ssh terminal i'm going to tell you honestly i never really thought of this use case for bastions prior to this but the reason bastions are good or still have utility these days is because some people are using a google chromebook and they're not going to have a terminal so they can't use ssh or if they're on windows they can't install the putty client or they won't have the remote desktop client and so the only way is through the web because a google chromebook is all browser and so this is one of those utilities for a bastion now bastions aren't just for this use case they're definitely a very good and secure way to connect to your virtual machines and a way of auditing who has access to what but this is the use case that azure is putting forward with bastion here and so the idea is that when you want to create a bastion you're going to have to add it to a subnet in your vnet or you're gonna have to make a new subnet and it has to be called azurebastionsubnet and it has to have at least a size of /27 so 32 addresses so that's what you're gonna have to do so you'll just have to go in and add in that subnet there and then once you do you can go ahead and launch your bastion so let's first take a look at rdp we saw how to do rdp already but let's see what it looks like
with azure bastion so what you do is you say connect with azure bastion for your windows machine that's going to assume
that you want rdp you're going to just put in your username and password as you normally would and boom you're in your
machine that's all it takes let's take a look at how you deal with ssh so you know if you're using a linux server
it's going to obviously want to use ssh because you don't have rdp for that and so you can actually use either the ssh private key or the username and password i recommend always using the ssh private key let's take a look at that process so you'll enter your username in and then you can switch over to ssh private key and then the .pem file which is downloaded locally to your machine and is your private key you can just select it and that will upload it and use it in the comparison and boom you're in your machine so that's just kind of a cool service that they have there let's just do a quick comparison between
windows and linux servers this is going to be very obvious for people that are used to
running windows workloads but for those who uh grew up on linux i just want to make
sure you understand some of the caveats of windows that you might not be used to so
obviously azure allows you to run both windows and linux instances or vms
and so let's just talk about windows first so when you launch a windows server just like your desktop computer
it needs a windows license to run whether it's windows 10 or whatever windows server you're using but
i just want you to know that when you launch your server you don't necessarily have to have the license on hand and so if you launch a server without a license all it's going to do is say windows
is unactivated and you're going to take some manual step to activate that license and i just want to tell you that
so that you're not afraid of launching a windows machine because you think you're going to get charged a license fee uh
it's not going to happen you have to take some manual intervention to do that so do not worry about launching that
windows machine you can bring your own license via the azure hybrid benefit so some enterprises already own their licenses they've already made a deal with microsoft and so they just want to reuse those instead of using whatever azure provides because it can be cheaper
you're going to set a username and password because you're using rdp to access the machine so you're not using
your ssh key pairs um windows machines require
larger instances okay so if you're running windows 10 or whatever it needs a lot more memory and so you're
gonna have to run it on a b2 which is a lot more expensive than a b1 and that's because it runs a full desktop
environment all the windows servers do this and so this is generally why i like
running linux so just be aware not to keep your windows server running for very long if you
don't have a lot of money and you're just learning to pass the certification let's talk about linux really quickly
here most versions of linux require no type of license so i think like you can get red hat you
might need a license for that if you want support you set either a username password or ssh key pair you can utilize
smaller vm sizes because you're not running a full desktop experience could you run a full desktop experience possibly i haven't ever tried to do it in the cloud myself unix and linux-based systems are traditionally terminal-based environments right so you're going to be sshing into them and you're not going to have something visual okay so there you go
so let's talk about update management at the start of this section on virtual machines i had said that there were some ongoing requirements software patches
and things that you had to perform on your virtual machines and so that is something that you have
to take care of but if you wanted to automate that process this is where you can use update management so that's what
it'll do it'll install operating system updates and patches for both windows and linux
and not only can it do it for your virtual machines on azure but it can do it on premise and even in other cloud
providers okay and the way this is going to work is
you're going to go to your operations tab where it says guest plus host updates and you're going to click on
update management now update management looks like its own service but it's actually using azure automation underneath and so that's what installs the agent and the agent is the microsoft monitoring agent mma so that's how it knows what to do you have to get
that installed on your machine so update management will perform a scan for update compliance a compliance scan is
by default performed every 12 hours on a windows machine and every three hours on a linux
machine it can take between 30 minutes and six hours for the dashboard to display
updated data from managed computers and i just wanted to point out a little bit about azure automation so because it
does more than just update management with azure automation you can enable update management change tracking and inventory and start and stop vms during off hours for your servers and virtual machines and these features have a dependency on a log analytics workspace and therefore require linking the workspace with an automation account so there is a little bit extra that we have to have but there you go hey this is andrew brown from exam pro
and we're going to be launching our own bastion using azure bastion services so there's two ways to set this up we can
go to bastions over here and create a bastion this way or we can create one after we've created a virtual machine i
prefer the latter so let's go ahead and do that and launch ourselves a new virtual machine and so we could either
use launch a windows server or a linux server today i'm going to be launching a windows server
and what we'll do is go down here i'll make a new group we'll call it the-enterprise and as we do that we'll just name this enterprise-d and we'll launch that in central us that's fine with me 2019 datacenter gen 2 is totally fine if you're trying to find it you go here and hit select and we'll choose the gen 2 datacenter image it is expensive but we're not going to be using this for very long for the username i was going to put data but i'll just make it azureuser to make our lives a bit easier and then we'll put testing capital t one two three four five six
testing one two three four five six and we will go down below we're fine with the settings here we're gonna go
next to disk we're gonna leave the disk to premium that's fine uh we'll let it create a new network that's totally fine
management is okay and we'll just actually go hit review and create
and now we'll just hit create so that it will go ahead and do that it'll tell us that it's in progress
and we'll just wait a little bit here i'll see you back in a moment all right so our instance is ready so
let's go ahead and go to this resource here and then on the left hand side you'll have connect and so i'm going to show you this step it's not necessary for you to do it because i'm going to show you how to connect via the bastion but i'm going to go ahead and download the rdp file this will only work if you're on windows by the way well i guess it'll work on mac but you'd have to install the rdp client for that and so here the user is azureuser we're going to type in testing with a capital t just double check that there i'm going to log in and make sure that this works we'll do that one more time oh you know it's testing one two three four five six there we go we'll say yes
and we'll just make sure that we can uh remote desktop into this just before anything else and there we go so that's
all good to me i don't need to see any more we'll go over to bastion we'll say use bastion and this is going to set up a bastion service in order to use bastion you need to have another address space defined for it and it makes it really easy to
make it here so i'm just going to go 10.0.1.04 and we'll go ahead and hit ok
and so down below it's going to choose an address space we have a security group um i'm just
going to put it for none i don't think i want one on that and if we scroll on down here we have
the resource group so we're going to put in the same resource group and we'll go ahead and create that
so before they had this really nice wizard you used to have to go and create all those things individually in your virtual network but this is really nice it does take a bit of time for this to provision so i'll see you back here in a bit that took a bit of time to create that bastion but it is ready to go and so now that we have it we can go ahead and utilize this connection here and so right away i think it's setting up for rdp here so what we'll do is type in azureuser and then capital t testing one two three
four five six we'll go ahead and hit connect and so notice that i didn't have to use
an external application i could just run it in right here it's all in the web browser so that's
pretty much how the bastion works i can't remember the pricing on bastion i think it's a little bit of money so i
don't want to keep this laying around here but this is great if you let's say you're on a chromebook which are
becoming really popular where you can't install native applications uh or you're just having issues because you're on
like linux or something like that so there you go that's all there is to it we'll go ahead and clean this up
and so i'm just going to go here find the resource group and we'll go ahead and delete i'm just
making sure that bastion's within there so it is good and there we go
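For reference, the same bastion setup can be sketched with the Azure CLI; the resource group, vnet, and names below are hypothetical placeholders loosely matching the demo:

```shell
# the bastion requires a dedicated subnet named exactly AzureBastionSubnet,
# sized at least /27
az network vnet subnet create \
  --resource-group the-enterprise \
  --vnet-name enterprise-vnet \
  --name AzureBastionSubnet \
  --address-prefixes 10.0.1.0/27

# bastion needs a standard-sku public ip
az network public-ip create \
  --resource-group the-enterprise \
  --name bastion-ip \
  --sku Standard

# create the bastion host itself (this can take several minutes)
az network bastion create \
  --resource-group the-enterprise \
  --name enterprise-bastion \
  --vnet-name enterprise-vnet \
  --public-ip-address bastion-ip
```

These commands run against a live subscription, so treat them as a sketch of the portal wizard rather than a tested script.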
so we just launched a virtual machine for linux now let's go ahead and launch
one for windows i'm going to go to the top here and type in virtual machines we'll go to the first link i'm going to
hit add add virtual machine and what we'll do is we'll create a new
group the last one i had was called bajor i'm going to call this one cardassia
cardassia and i'm going to name this machine also named cardassia
and this time what we want to do is move over to a windows image i find the easiest one to learn with is windows 10 pro just because i find the server ones a little bit daunting so i'm going to go windows 10 pro and then what we're going to do is choose a larger size this is not going to work we cannot run windows on a b1ls so we're going to have to go a little bit larger and we don't have to go too much larger
here but the idea here is that there's going to be a more expensive spend here so we're not going to be running to keep
this running for long but here we have the b2s that is the appropriate size to run this anything smaller i don't think
is going to work and we are going to put in a password here so i'm just going to put in cardassia and we'll do cardassia123 with a capital on it i guess we'll just do this here and i'm just going to go back and lowercase this one
and we're going to allow the inbound port of 3389 because that is what rdp needs i'm going to confirm that i have a
windows license i actually don't but the thing is you can still launch one for your test purposes it'll just complain
saying you're not activated so there are some limitations but it's good enough for us to learn okay and so now that
that is all great we'll go next to disks we're going to go with premium or standard ssd this time
we're going to go ahead and hit next and go to networking it's going to create us a new vnet which is a great idea we're going to let it create a network security group on the nic just like before we'll go ahead and hit next we'll leave all these options alone this all seems fine to me
uh and we'll hit next review and create and we'll go ahead and create this
server all right and so that's gonna go ahead and create it so i'll just see you back
here in a moment when that's done deploying all right so after a short little while here it looks like our
windows server is now deployed so what we can do is go to that resource if you wanted to see what it's deployed it's
the same stuff as always you have your network interface card your virtual networks
nsg the ip address but let's actually go to that resource now and so let's see how we can gain access
to this virtual machine and so what we can do is use rdp luckily i am on a windows machine and so
i already have the rdp client that i can use so all you got to do is download the rdp file
and then once we have that file i can just double click it and i can open this up if you're on a
mac you can download the app in the app store and so i'll go ahead and type in my password so my username was cardassia
and then my password was capital c a-r-d-a-s-s-i-a one two three we'll hit okay
and then it'll give us another one we'll say yes and now we are in our virtual machine so
there you go how cool is that i'll just give it a moment to load up but this is a full uh windows 10 pro
uh and as i said before you know we don't actually have a license so if you're afraid of spinning it up because
you're gonna get charged a license fee uh for windows you do not have to worry that's not gonna happen you have to do
some manual intervention uh for that to happen so we'll just wait a little while here for this to load um we're
not using the most powerful machine so it does take a little bit of time and so we just hit accept here
and here we are so we are on uh we have our nice windows machine here whoops
i don't know if it has any games let's go take a look maybe play minesweeper
um no maybe maybe you have to download the store i'm not that familiar with windows
machines but um so there you go so we'll go ahead and close that and you know if we were using the bastion it's the same
process you saw how we used it with ssh but if we had the bastion and it's so much work to set one up we already did
that before uh but all you do is enter your credentials in on the page just as we did and it and it's just a lot easier
that way uh so let's go ahead and just tear down this machine we're all done with it so i'm just going to hit um
uh delete and uh if we find that resource group we
should be able to easily delete them all i find the easiest ways to go up here go to all resources
and then there's the resource group there and then hit delete resource group and then i'll type in the name of it
which is cardassia and i'll delete all those resources but after that's done always just take a double check
on your all resources tab and just make sure that those resources are gone just because sometimes they
stick around but there you go that's as simple as it was to launch a windows machine
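The same Windows VM launch can also be sketched with the Azure CLI instead of the portal. This is a hedged sketch, not from the video: the resource group name, vm name, password, and image URN are illustrative assumptions, and the exact Windows 10 Pro image URN available to you may differ by region and over time.

```shell
# Sketch: launch a Windows 10 Pro VM like the portal walkthrough above.
# All names, the password, and the image URN are illustrative assumptions.
az group create --name cardassia --location eastus

az vm create \
  --resource-group cardassia \
  --name cardassia \
  --image MicrosoftWindowsDesktop:Windows-10:win10-21h2-pro:latest \
  --size Standard_B2s \
  --admin-username cardassia \
  --admin-password 'Cardassia123!' \
  --nsg-rule RDP        # opens inbound 3389 so rdp can connect

# when you're done learning, tear everything down in one shot
az group delete --name cardassia --yes
```

These commands need an authenticated `az login` session and an active subscription, which is why the portal route in the video is the easier way to follow along the first time.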
hey this is andrew brown from exam pro and let's take a look here at virtual
machines which i consider the backbone of most cloud service providers and azure keeps it really simple by calling
it virtual machine so we can go up here and type in virtual machine and make our way
over here but right now i don't have any options because i'm using a tenant that doesn't have a subscription applied to
it so what i'm going to do is go switch back to my original tenant and this one has this subscription applied to it
and so what i'll do is just click back up here and now we'll just type in virtual machines
and i can now see uh i have options of creating virtual machines let's go ahead and go create a linux one first and then
we'll go ahead and create a windows one and then we'll see how we can connect to it all right so first we'll go to the
top here hit add we'll click on virtual machine and we're gonna be presented with a lot of
options so we'll have to choose a subscription and so there is mine i want to probably create a new resource group
here i'm going to call this one bajor and i will name this uh bajor again
and i'm going to launch this in us east i'll just set it to one availability zone for the time being
then here we have what we can choose as an image i can click on see all images and choose from a variety of them so if
i didn't want to use ubuntu i could launch something else like debian or something like that
but really i just want to stick with uh ubuntu because i'm fine with version 18.04 uh then here's
what really matters is the size because that's going to affect our cost so if we click on see all sizes we have this
nifty table where we can sort the cost it's just loading the cost here it's dynamic
this is going to be based on uh what your base subscription is so if you're in canada you're going to see canadian
prices in the us you can see us prices etc etc and i care a lot about cost here so i'm
just going to sort this by cost and here we have the b1 ls which is very cost effective
we have a ram of 0.5 gigabytes and some other options there so we'll go ahead and select that there
and we have a couple options we can use ssh public key or we can uh utilize a password and so i think what we'll do is
use an ssh public key because that's pretty much the standard there we're going to name the username bajor if
it lets us it might want some additional options there nope it's okay
oh great and we'll go ahead and generate a new key pair and i'm just going to name that one
bajor and uh we have some options here for inbound rules uh so you could set to
none this is just setting up the nsg for you but we're probably going to want to have
that port open for ssh because that's how we're going to make our way back in here if we're running a um
like an apache server we'd want to have port 80 open so we can go ahead and do that
we'll take a look now at disks so here we have options between premium ssd standard ssd and standard hdd
i just want this to be cost effective so i'm just going to go with standard hdd but generally you want to have
at least a standard or premium ssd when you're running real web development workloads
then there's encryption here and so it's always turned on by default which is great they also have this option of
double encryption with a platform-managed and customer-managed key we're just going to leave that as default enable
ultra disk compatibility that's not something we need to do here because we are not using ultra disk
and here you can see that you can attach multiple uh disks here so i can go and do that but that's not
something i need to do today and some other advanced options which we do not care about we'll go over to
networking and so it's going to end up creating a new vnet for us and a new subnet for us and assign it
an ip address uh it will also set up a nic network security group so the network
security group is not going to be applied at the subnet level it's going to be applied at the nic which is
attached to the vm there and so we'll just leave it to basic we're going to allow
inbound ports for port 80 and 22 that was carried over from earlier
we can put this behind a load balancer but i don't think we're going to do that right now
we'll go over to management uh we have some additional options here for monitoring uh this is all okay here we
can set it to auto shutdown actually i'll leave that alone you can also enable backups here
we'll go advanced and now we have this option here for custom data
i covered a section on cloud-init azure doesn't call this user data but most other providers will call it user data
so we could provide a bash script or additional information here if we wanted to
then down below there's some host group options we're not going to worry about that in proximity placement group this
is really important if you need to have instances nearby uh i think this is pretty common with um
what's it called high performance computing hpc workloads i can't remember the initialism right now but we covered it in the core
content then we can tag our resource here uh we'll just leave that alone i don't care about tagging too much but
generally it's good to tag in practice and then we will get to review and create our server here we'll go ahead
and hit create and then we'll have to download our private key so we can utilize it later
and so that's downloaded there and now we're just waiting for it to deploy this and i'll see you back here
in a moment so we had to wait a little bit there and finally our deployment is complete
and we can go ahead and just review all the things that it created so notice that it created the virtual
machine it created a network interface a nic for us the nsg the network security group
the virtual network and also a public ip address when i do clean up a lot of things i
always miss are these ip addresses and i know that azure gave me a warning that said hey you're about to spend
uh 700 yearly on ip addresses because you weren't releasing them so when we do the cleanup step i'll
definitely emphasize about deleting those ip addresses and how to go about that
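Those leftover public ip addresses can be hunted down from the CLI as well. A small sketch (the query expression is an assumption about the json shape `az` returns; the resource names are mine): an unattached public ip has a null `ipConfiguration`, so you can list just those.

```shell
# Sketch: list public ips that are no longer attached to any nic,
# i.e. the ones a deleted vm may have left behind and that keep billing.
az network public-ip list \
  --query "[?ipConfiguration==null].{name:name, group:resourceGroup}" \
  --output table

# then delete any straggler (illustrative names)
az network public-ip delete --resource-group bajor --name bajor-ip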
let's go take a look at the actual resource now so here we are and you can see we have a
lot of options the left hand side such as the disk so we can see the disk options there and
there's other additional security options let's go take a look at how we can go
ahead and connect to the server and so there's different options here so we have rdp ssh and bastion since we are
using a linux machine we're not going to be using rdp that's really for windows um but the trick here is that
i would need to have a client on my computer to connect i think it's called putty if you're on a
linux based machine it's a lot easier uh and certainly i have the windows subsystem for linux installed so i could probably
connect that way but i figured let's just go ahead and connect via bastion because i think this
is a pretty darn cool feature so let's go ahead and create ourselves a bastion and this will take a little bit of time
here but we'll go ahead here and just set up the subnet so to associate a virtual network
to a bastion it must contain a subnet named azurebastionsubnet so they actually have a special subnet for it
so what we'll do is we'll just go back to our um our server here
which we call bajor and i think we can find the subnet through here
so on the left-hand side if we go to networking we probably could find it that way um
so i'm just looking for that security group um in there
it should be um maybe it's not there if it's not there
um well you know we could just go over make our way over to subnets it's not a big deal
because it's called bajor it's pretty darn easy to find to begin with and so under subnets here what we need to do is
add a special one here and just gonna remember how this works um so you need to create a subnet called
azurebastionsubnet with a prefix of at least /27 so we'll go ahead and add a new subnet
and we'll call it that uh we'll take out the space there and the existing range is 10.0.1.0 etc so we'll
just do it on 10.0.2.0/27
uh and it says 10.0.2.0 is not contained within the virtual network address space
oh right so we have to add the address space first
oops we'll just hit cancel here that's okay we'll discard that we'll make our way over to address space and
we'll go ahead and add 10.0.2.0/24 that'll give us a pretty darn large range there
and so now what we'll do is go back to our subnets and we'll go ahead and create that there
and it said it only needed a /27 so we'll just give it a /27 we don't need to go bigger than we need
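That sizing rule is worth sanity-checking before you create the subnet: "at least /27" means the prefix length must be 27 or smaller (/27, /26, /25, ...). A tiny sketch with an illustrative helper (the function name and the commented resource names are mine, not from the video):

```shell
# azurebastionsubnet must be /27 or larger, i.e. prefix length <= 27.
# bastion_prefix_ok is an illustrative helper, not an azure tool.
bastion_prefix_ok() {
  # $1 is the prefix length, e.g. 27 for 10.0.2.0/27
  [ "$1" -le 27 ]
}

bastion_prefix_ok 27 && echo "a /27 is fine for AzureBastionSubnet"
bastion_prefix_ok 28 || echo "a /28 is too small"

# then create the subnet in the portal or via cli (hypothetical names):
# az network vnet subnet create --resource-group bajor --vnet-name bajor-vnet \
#   --name AzureBastionSubnet --address-prefixes 10.0.2.0/27
```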
and that should be okay we'll go ahead and hit save and it shouldn't take too long so now
that we have that we can go back here and we'll give give this another go here uh it's there so it shouldn't be
complaining maybe what we'll do is just start from the the start here again
yeah there you go the azure portal is like that a lot where you'll have something set up and it has the old
state of it and so you just have to trust yourself that you know what you're doing and you have to go back if you
don't have a lot of confidence a lot of times you'll get stuck and you'll think okay i don't have it right but i always
just try again and hit refresh because the azure portal is very inconsistent so we're just going to wait
for this to create this does take a little while to create so i'll see you back in a moment
so after waiting five minutes our bastion is now created and so what we can do is without even using a putty
client or having to use linux directly we can just connect through via the bastion
so here we'll see we have some options here so we want to do ssh private key from local file
okay and what we can do is go ahead and select our bajor key
and then i'll just scroll down here and hit connect oh um and i think we made the username
bajor and we'll go ahead and connect now and it's complaining about a pop-up here
so we'll go up here and say always allow and we'll try that again
and then we'll say allow again and so now that we're into our server here let's go ahead and try to install
apache and see if we can get at least the default page running um so this is using ubuntu if my memory serves me
correctly it should be sudo apt-get install apache2 and we'll just hit y for continue
and we'll just wait for this install doesn't take too darn long and after a short little wait there it
finally did install also if you notice this little icon here we have a little clipboard here
i don't seem to ever use that there so that's fine now when you install apache we might
have to go ahead and start it up so let's just take a look to see if it actually is in the running directory
here so we'll go to cd /var/www and so that's where the default directory is right
but we can just check to see if it's running by doing a ps aux and grepping i think it's httpd
or we can grep for apache here and so it looks like it's already running so that's pretty great for us
and since it's running on port 80 and we've opened up port 80 we should probably be able to access that
here so let's go back to our actual virtual machine so we'll go to virtual machines
and we have that virtual machine running i'm just going to click into it because i just want to find out its public ip
address so here it is there and for lucky this will just work just copy the clipboard
button right there and look at that we have the default page isn't that cool
so that's all there really is to it and i could even update this page you don't have to do it but i'm just going to
update it for fun actually i probably have to restart the server so maybe i won't do that
but yeah so we connected through the bastion so that was pretty darn easy we probably could have also used um the
cloud shell to connect um but maybe we should we could give that a go as well
since we're all done here let's go ahead and do some cleanup the first thing i want to do is the easiest way is
actually to go to all resources here at the left-hand side and this really gives you an idea of
everything that's running your account so i actually have other stuff in here that's not relevant
but the idea is that all of our stuff is running within a resource group and so i'm just taking a look there
to see if i can see the resource group here yeah they're all there right there so i
can go ahead and click that and so everything more or less should be self-contained within here see all that
stuff you can even see the v-net is part of it as well and so if i go ahead and delete
this resource group it should delete all this stuff so i'm just going to type bajor to confirm
and we'll go ahead and delete and that should do a good job of cleaning up all those files
i'm not sure if it'll delete the ip it should right there but if it doesn't what i recommend is after everything is
deleted just go back here to all resources and just double check to make sure they all
vanish because when this is done they're all going to start to vanish from this list
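That delete-then-double-check routine can be sketched from the CLI too (names are illustrative; the query expression is an assumption about the json fields `az resource list` returns):

```shell
# Sketch: delete the resource group, then verify nothing was left behind.
az group delete --name bajor --yes --no-wait

# once the deletion finishes this should print an empty table; anything
# still listed is a straggler (public ips are the usual offenders)
az resource list \
  --query "[?resourceGroup=='bajor'].{name:name, type:type}" \
  --output table
```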
and if there's anything remaining you'll know because it's still here right so just be careful about that that's all
i want you to know and so that's the linux part and so let's go ahead and actually now set up a
windows server hey this is andrew brown from exam pro
and it's really important that we know how to monitor our virtual machines so we can kind of check their performance
or do some diagnostics on them and so a couple things we're going to look into we're going to set up some virtual
machines and we're going to look into automation accounts uh log analytics and metrics and alerts okay
so the first thing we'll need to do before we do anything else is make sure that we have particular resource
providers enabled so go to the top here and as always go to your subscriptions and we will click into our subscription
go all the way down the bottom click on resource providers and there's two in particular we're looking for uh one is
for alerts so it's alerts management so make sure that is registered if it's not press that button
and then we want insights so make sure insights is enabled as well they should be by default but just in case they're
not uh you should go ahead and do that so we're going to need some virtual machines i'm going to set up three and
the reason i want to do that is just to show you that there's many different ways in azure to do the exact same thing
and sometimes when you do something it just works and sometimes it just doesn't and it can work again and not again so i
want to give you a few different options available to you so the first thing we're going to spin up is a virtual
machine and this one in particular we'll make a new group we're going to
call it dax and so we're going to choose for this one an ubuntu linux i'm going to switch it over i'm going to choose a
minimal install so we'll choose this one over here which is the ubuntu server minimal
and uh what i'll do is name this and there's a bunch of different daxes in star trek so we're going to go with
kurzon dax here and we'll put all this in east us so wherever it defaults here so if that's
where it wants to go that's where it's going to be but let's pay attention to where that is okay
let's make sure that we're using a very inexpensive instance i'll give it a second to load if you do
not see the cost just drag these over until you do it just depends on your screen right
um so once this loads it will tell us that this is probably the cheapest i think
that is the cheapest because nothing's going to go lower than 500 megabytes of ram so i think we're already set on
it i'm not going to wait for the load to take forever notice the red line there it is not an error that's just what it shows
and so i'm going to go for password authentication here and what i'm going to do for this account is i'm going to
type in um dax as the username and then testing with a capital t-e-s-t-i-n-g one two three four five six because it wants
an upper case a lower case some numbers and a certain length and that's what works for me
so now that i have those filled in ssh we're gonna keep that open we're gonna go to disk you can leave it on premium
if you want i'm going to go to standard i want to save some money just in case i forget to turn this machine off it's
going to set up some networking for us which is totally fine we're going to go to management
and i believe so there should be guest os install here it is not here it's the next step i
believe where is it i know you're in here let's go back one step here i have a
feeling it's on this page because you can install the um guess os metrics right away
so i'm just seeing here i cannot seem to remember where it is i might have skipped over it because i'm
going so quick here um well maybe it's not set here so when we
go to do the other virtual machine we'll see if it's actually there but for this one we'll just launch it without it
which is totally fine i was going to do one that way anyway and so we'll go ahead there and create
that virtual machine okay now while that is going and by the way if you think you call
that number that's not me so uh you know you can call if you want but it won't go to me
so that's going to go ahead and deploy that what i'm going to do now is go and launch another virtual machine in the
same uh in the same environment so this one this time around is going to be again
it's going to be linux we're going to choose dax here and this one is going to be
jadzia dax and we this time around are going to
choose ubuntu server lts so this one's a minimal install and this one here has a
bit more to it we're going to choose gen2 uh this is now 4.86 which is strange it
looks like it was the same one before i don't know why the other one was more expensive because it said 40
dollars but maybe i was mistaken the red means nothing as per usual it's just azure uh tricking us we're going to
make the username here dax and the password testing with a capital t-e-s-t-i-n-g you know what i'm trying to say here
and i'm going to do uh testing one two three four five six and we will go next to disks we'll stick
with premium for this time it doesn't really matter if we go to advanced here there's no uh interesting options there
this is all fine this looks good we go to management so now notice here so we didn't have it for the minimal
install but for this one we do we can actually put in the guest os diagnostics and what it will do is it
will have to create itself a storage account so that will be in azure storage to store
our diagnostic data and so that seems like a good idea to me so what we'll do is we're going to go to
next but notice we didn't have it for the minimal install probably because it doesn't have the agent what's
called waagent the azure linux agent installed for it to do that by default so just notice that it varies on some machines
we're going to go to next we're going to hit review and create and we're not done yet we're going to
create one more virtual machine and this time it's going to be a windows once i hit the create button here
uh we're going to go back to virtual machines and this time i'm going to launch a windows machine and i'm going
to use we're going to put in the same group here i'm going to call this one uh tobin or let's call it
lela dax here and this time i'm going to choose a windows server so i'm going to go to
windows server here we have a lot of options i'm not that great with windows servers i'm going to choose a gen
2 windows server here it's not going to let us get away with this b1ls it's not
big enough you need a lot more compute and memory i think it's like two cpus and four gigabytes uh but i just remember to always
choose ds2 v2 um which is pretty hefty it's pretty expensive maybe i could choose something
cheaper like the ds1 no i need two cpus so ds2 v2 that's what you use when you're launching a windows
server if you don't do that they'll complain and that'll probably be pretty darn
expensive but we got to make sure that we just shut this off and get through this as quick as possible i'm going to
choose the same thing as always so i'm going to i'm going to put in dax and then for the password i'm going to do
testing 1 2 3 capital on that t 4 5 6 testing 1 2 3 4 5 6 because it wants 12
characters we're going to let it do on the rdp that's totally fine uh sure premium ssd we're not going to
keep this around for very long we're going to scroll on down go to management
and notice that you can enable os diagnostics here i'm not going to do that here just so i can show you show
you it in the other variation we're going to go to next we're going to go hit review and create
and we're going to go ahead and create this instance whenever it lets us do it there we go
okay we'll hit create and now it's deploying so we have these three virtual machines
we're creating and so what i'm going to do is make my way back to virtual machines here and so i have the two
there i'm going to give it a refresh make sure i can see the third one creating
we don't see it just yet but these ones are running so
what i'm going to do is open up kurzon i'm going to open up jadzia and one has the guest
um what's it called the diagnostic settings it's the guest metrics installed and the
other one does not so if i scroll down here so i just want to show you there's a lot of variation and sometimes you
know you do it one way it works another way it doesn't so for jadzia it was installed by default so that's the one
that is using ubuntu standard and then this is the minimal install where it's saying hey do you want to
install the metrics and you have to pick a storage account if you didn't then you would have the option to create one here
it's called dax diag so i just want to show you we're going to go ahead and enable those
guest level monitoring metrics okay so
we have a storage account and that's where the data is going to go um
and this will probably fail because i don't think it has the necessary things installed but i just wanted to show you
you could do it that way but what another thing we're going to need is we're going to need
um a log analytics workspace so all a log analytics workspace is is a place to store your logs it is
essentially a data lake for a lot of different resources so you can put all sorts of resources under here and that
way you can search across them see how this failed if you ever have a failure like this the
way you investigate this is you go in here even though it says a sas token for the storage account couldn't
be generated it's kind of a misleading thing here because if i actually click up into my
notifications we go to our activity log we can see kind of what something has happened here so i know this failed
it might not show up here yet sometimes it takes time to show up so we'll revisit this later but it would show
red errors here and if we looked in there it would tell us that python's not installed on that machine because it's a
minimal linux install so if you run into those problems that's the reason why some vms are a bit easier to work with
than others but we're looking at log analytics and again it's just a place to collect your logs but i want to see
because i think jadzia is the one that actually is the one that is working properly but if we go down to
metrics or logs it's going to say hey you if you want to use logs you're going to have to go
ahead and create you have to enable it but what it's really doing is just creating a log
analytics workspace so if i click this here what's going to tell me is say hey you
don't have one we'll create a default one for you um i don't want to do it that way what i
want to do is create one myself and then link it through there or what i could do is make it through automation
so what i'm going to do is i think what i'm going to do is i'm going to do through automation
so there's another service called automation accounts and this particular service is for configuration management
update management and it's a way for us to keep our vms up to date and so this one's going to be called dax
oh it's too short it doesn't like that dax automation and i'm going to choose the resource
group dax and here it's going to say azure run as
account so this is going to give it more permissions so that it can run as a contributor role
and that way it will have access to resources to do what it wants to do so we'll go ahead and deploy that and then
once this is deployed we'll create our our workspace through this because i found an issue where if i made this and
tried to link it just didn't link and i don't want to have those problems so i'm just going to make my life a
little bit easier by making it through here so under here if we go to update management
it's going to say hey you need a workspace in order to use update management if you're wondering what
update management is it keeps your vms up to date by applying patches so if we drop down here we're just going
to go ahead and create a new workspace and i'm going to do it that way because it'll be automatically associated with
that workspace on creation and uh we'll let that go there for a bit but while that's setting up a new
workspace for us if you go over to run books this is the power of well i'm not i'm not
it wants to make that thing there so i'm just going to click ok there don't worry about that discard but the whole power
of this is that it can apply patches it can it can manage the state of your vms
but if you need to do something like run something like a a series of run commands
they have a bunch of different run books here uh so there's a variety of different
kinds and i think we can click into them there's something about sql that sounds kind of interesting how to use a sql
command in an automation runbook so that's just a runbook so runbook just
means like a series of run commands something that you're automating on your server and you do it through here um so
update management looks like it is now working uh we didn't press that enable button uh initially but it worked there
but if we wanted to have vms under here we'd have to go ahead and add them manually
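The automation account and log analytics workspace set up above through the portal can also be sketched with the CLI. A hedged sketch, not the video's steps: names are illustrative, `az automation` lives in an extension that `az` offers to install on first use, and linking update management to the workspace still has extra wiring beyond this.

```shell
# Sketch: create the log analytics workspace (the "data lake" for logs)
# and the automation account from the cli. names are assumptions.
az monitor log-analytics workspace create \
  --resource-group dax \
  --workspace-name dax-workspace \
  --location eastus

az automation account create \
  --resource-group dax \
  --name dax-automation \
  --location eastus
```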
um and so we're just gonna wait for those to show up alternatively if we go back to our
server or sorry one of our vms we should be able to add it this way it will just tell us to go that way anyway
um so there should be like update management or something in here could be configuration management
um yeah that still would be through something right so it'd probably be
still through automation but i could have swore there was maybe guest and host updates
yeah so here it's just saying go to azure automation so you don't really set it through here you have to associate it
through azure automation and so these are ready to enable so i'm going to say go ahead and enable those i don't know
where our windows server is we definitely spun one up it's not a big deal if we don't have it it's not really
crucial to this tutorial but it would have been nice to show um the windows 10 there
i'm gonna go see what happened to our windows server oh it is there okay leela's there
so we'll take a look at that in a second uh but we're just adding those there we're not going to really apply any
patches i just want to show you how to associate them there um and just to highlight the fact that there are run
books which are pretty cool and you can also do other things like if you need to like install python packages or stuff
like that uh to to do stuff you could do that as well um so there's a bunch of interesting
things in there but it's kind of like a side dish to the follow along here so this would have
created our workspace so now let's go over to our workspaces and we have one here and if you're just
curious what it looks like if we go here something super exciting you just put the name and region in and it has the
default page you go so you're not really missing out on creating one uh by hand there
but underneath here what we can do is see if our virtual machines are actually associated uh with
this here so if we go under virtual machine you're gonna see that these ones are connecting
which is interesting because i didn't do anything to connect them i'm not sure why that's happening but i'm going to
hit connect here and that will connect that a lot of times i'll come in here i have to
manually connect them i don't know if it's because we put them in the uh what do you call it within the uh the
automation there if we go back to automation which ones did i choose i already forgot
update management um they're not showing up here now but i
know they're there maybe we triggered it when we were doing there but anyway the point is is that
you have to make sure that those are um configured so that it's actually sending data over to the workspaces so
that we can then use the logs and things like that so if we go back to our virtual machines
over here you can see they're connecting and this takes forever
while this is going let's go take a look at storage accounts and actually a better way would just be
to go look at the virtual machine individually here so we'll just go back to my virtual
machines and i go to the resource group we scroll on down here we should see a storage account within here
let's just type in storage there it is whoops
and the thing is that we can actually view some of the data in here so if you go to open explorer you'd have
to install this it's for windows and it's for mac as well but once you have the storage explorer open
you can just click open here and it'll open up once it's installed on your computer of course
and what i can do if we have any data it might not be collecting any data yet but if we go
down to our storage accounts here oh here it is down below and expand it under tables see how this
wad prefix that is the name of the azure diagnostics agent that is collecting data and here you can see that it's already
collecting stuff all right and it's collecting for jadzia dax
and so i'm just kind of looking at the data that's collecting nothing super interesting
but the thing is is that we're gonna make our way over to logs because by default
azure collects host metrics so things about the instance like cpu usage but there's
things that it doesn't collect like memory and other things like that
so that's why we need to turn on this setting here the diagnostic settings
so we can collect those metrics these are called guest metrics so we have memory network file disk and
storage so if you wanted to know how much storage was left on the actual disk on the
virtual machine or how much memory was being used you'd need that uh set up here
once this is enabled you can send data over into azure monitor and other places it's not showing application
insights here i think that's the name of the service i think
that one is for the windows one so we'll see that when we go over there but sinks just means we're gonna send
data from here over to azure monitor right but we'd have to have preview to see
that uh nothing that interesting here uh but if we go over to logs now um this should
not be showing up here anymore so if this is still showing an enable screen then we got to go back to
our log analytics we'll click into here we'll go down to virtual machines and we'll see if it's connected yet so it
still says it's connecting right this one's ready though and i haven't even done anything with that one yet so
what we'll do while we're waiting for these linux ones to uh to spin up we'll go over back over here to our virtual
machines and we'll go find that uh windows machine which i don't know why it's not showing up here
there it is and what we'll do is we'll go ahead and connect to it
so i am on a windows machine so i'm going to go here
and i'm going to click on rdp and i'm going to download that file i'm going to double click this file on my
windows machine and i'll be able to open this up instantly it was called dax testing one
two three oops testing one two three four five six looks good to me we'll hit okay
we'll say yes because if you want to collect um
uh particular metrics like guest metrics you have to enable like when you're on windows server you have to enable it to
uh to send performance counters so i'll give it a moment here it doesn't take too long
while that's going we should look to see if we actually have um the uh
the settings set up here the diagnostic settings and so this is what i'm talking about the enable guest level monitoring
so we want cpu utilization disk and network so we'll go ahead and click that there
and while that's going we'll go back here and so this has now started up i don't know much about windows
servers but i do know how to turn on at least last time i did i know how to turn on the performance counter so see
here it says online performance counters are not started so i can right click this and then just say
start performance counters and that will have it start sending data over to our storage accounts assuming that this is
enabled here so this is going to go ahead and install the diagnostics (wad) agent onto
the windows server so it can send back metrics and so we got a lot of things going
on here but let's go back to jadzia and um i want to see now if i have any if i can use logs now
there we go so it's now connected to the workspace so it knows where to send that data
and you can pull up when you go to logs it'll show different types of default queries if you're in the actual group
here you'll have the same screen but it'll show you across all possible resources so it's a little bit easier to
figure out like useful queries for virtual machines when you do it through here and there's some ones there's some
that we can run right away so if we go over to um
the diagnostics we could or actually availability is a good example so heartbeats will tell you
if the server is live so a server will have heartbeat so every whatever second or minute it's going to
say hey i'm still alive that's what a heartbeat is so we'll go ahead and run that
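That availability query is plain Kusto (KQL); a minimal heartbeat check along these lines, using the standard Log Analytics Heartbeat table, would look roughly like this:

```kusto
// Count heartbeats per computer over the last hour --
// a connected machine should report one roughly every minute
Heartbeat
| where TimeGenerated > ago(1h)
| summarize HeartbeatCount = count() by Computer
| order by HeartbeatCount desc
```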
and we'll give it a moment here i'm not sure why it's complaining that doesn't work i'll just click off
and try that again so go back to availability we'll hit run and it has the right scope if you're
wondering how it knows what to check that's all based on scope there and for whatever reason it just decides
that it doesn't really want to run right now that's a bit frustrating but if it's not
working for this one we can always check our other one which is curzon and we can go see if this one works
that's why i made multiple servers because i knew one of these might not work right away
so i'll go ahead and run this one here it says no logs are configured so that makes me think that
our workspace here if we go back over to here and we go down to virtual machines that they're actually not linked oh they
still are connecting there you go so it's a bit misleading it looked like we could query but we still can't
um let's go take a look at this virtual machine here our little one still
and let's see if we can do anything with it yet so we go down to
diagnostic settings it looks like it's installed and so if we go to performance counters these are
the counters that it should be collecting data on you could do custom if you want and change those values
but we're just going to leave it basic for the time being go ahead and save that
and i believe we can do logs over here on this one so we'll go over to availability i'm going to load that into
the editor this time instead of hitting run i can hit run and it will return and says that it has a count to seven so
seven heartbeats that's good if you wanted to create uh rules you could easily do that there's actually uh
some nice ones here if we go into here um this won't work unless our
our our metrics are working our guest metrics but there's a bunch down below here
i'm just trying to see one that might be interesting for us to set so you might go here
and i don't think i like that one say this one here i'm just going to
clear this all the way out oops so here they even say to create this
alert just hit the plus new button here so we go here and
sometimes it does this it doesn't actually bring it over which is weird so you have to go back
i told you like this the ui for azure is super buggy so if you don't have any confidence
don't worry it's it happens to all of us um so we'll go back to our alerts and i'm going to load that into the
editor and what i'm going to do is just copy it and um
we're going to go ahead and hit run and so here we're getting some data back i think it's going to complain about the
time generated but if we go and create a new alert oh man it just does not want to uh copy
anything over and so it's like i can't even paste
we'll go back to here i'm going to copy this i noticed this happens because i had it double clicked maybe
it just creates one from what you have selected and so it should complain about this but
it's not so that's great and so we can go here and we can put a threshold to five
i'm just putting any value here and that gives you a monthly cost for that alert so you hit review and create
please correct the following details where oh the alert name okay
my alert and we will hit review and create we'll hit create
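Under the hood an alert rule like this is just a scheduled KQL query plus a threshold; as a sketch (the counter names are the standard Perf ones, and the 80 percent threshold is only an example value):

```kusto
// Fire when average CPU over the last 15 minutes exceeds 80%
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| where TimeGenerated > ago(15m)
| summarize AggregatedValue = avg(CounterValue) by Computer
| where AggregatedValue > 80
```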
and so there we can create an alert so we could create an alert like our memory is running out and those are kind
of useful things to do so talking about metrics
we're going to make our way over here and notice we have host metrics and guest metrics so host metrics are
general ones that are available to us and guest metrics are memory and stuff like that
so if we have that you're going to notice now we have like disk and memory and things like that if we didn't have
that installed we wouldn't be able to access this information so what i can do is go ahead here and
try disk time i don't know if it'll show us anything right now because it might have not collected enough information go
over to virtual host we'll have different metrics here so a standard one would be like cpu usage
okay so that one is displaying there the perf performance metrics always take a little bit more
time to collect so i'm not sure if i'll stick around to show you that but if we go over to availability sometimes
we need to go debug performance like you can hit run and this will tell you if it's actually
collecting information so see right now it shows me nothing so it hasn't collected anything for process memory or
processor but if it did then we could go and use these other ones down below here so we have
free disk space so that's a really useful one to have which we can't get metrics on that right now
but that's what we've been trying to do is collect that information so let's go back to our workspace
and what i'm going to do is take a look at our virtual machines that are
connected and this one has an error and that's curzon and i'm not surprised because
again it's a minimal instance so it probably couldn't install the agent so that's expected but
i just wanted to show you that edge case if we go into the jadzia one which is the one we were waiting for this entire
time we should hopefully be able to see some metrics so
if we go down here and we choose yes metrics i want something with memory here
uh like percentage okay so it's still not showing me anything and that's fine if we're not
getting any metrics what we can do is we can go log into this instance and give it some data so that it does something
all right so what i'll do is go all the way to up to the top here into my overview i'm
going to copy this public ip address we're going to open up our cloud shell and make sure that it's in bash all
right and we're we're going to type in ssh um
dax and then we're going to paste the ip address in we'll hit enter we'll
say yes and the password is Testing123456 that's with a capital t
and what we can do here is we can install a tool called stress-ng what that tool does is it will
help us do stress testing and so that allows us to make some metrics
so we'll go ahead and install that via snap snap is another package manager for
ubuntu alongside apt-get or apt it doesn't take too long to install and while it's installing i actually
have a pre-made uh line here to check the memory or to fill up the memory
i'm just going to go ahead and paste this line in so this is stress-ng with --vm-bytes and it uses awk it's going to
read the memory availability and it's going to fill it up to 90 percent so we want to have 90 percent memory usage i'm going to go hit
enter i'm just going to let that run now it's not outputting anything but it definitely is running there we go and so
we're just going to leave it there and so soon enough we should have some information
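Reconstructed roughly, the session above boils down to the following commands (treat the exact flags as an approximation of the line pasted in the video; the 90 percent figure is the one mentioned):

```shell
# Install stress-ng if it isn't present (on Ubuntu: sudo snap install stress-ng,
# or sudo apt-get install stress-ng -- both package managers carry it)

# Compute ~90% of currently available memory, in KB, from /proc/meminfo
mem_90=$(awk '/MemAvailable/ {printf "%d", $2 * 0.9}' /proc/meminfo)
echo "allocating ${mem_90}k"

# One --vm worker allocates and holds (--vm-keep) that much memory;
# --timeout stops it after 60s; guarded so it's skipped where stress-ng is absent
if command -v stress-ng >/dev/null 2>&1; then
  stress-ng --vm 1 --vm-bytes "${mem_90}k" --vm-keep --timeout 60s
fi
```

While that runs, the guest-level memory counters in Log Analytics should start to climb.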
and i think if i go there yep it's still there so it's fine so now if we go over to
um down to our logs and we go back to
performance and we just choose this one here we might see something now
again it takes time so you know it looks in the 15 minute uh window here but i just want to see that it's
collecting anything for me it doesn't matter what it collects just collects something for me you know
so i think it'll just take a little bit of time while we're doing that we should make our way back over to our workspace
because there's some other settings that are kind of interesting that i noticed if you go over to agent configuration
um you can collect additional information over here so we have performance counters for
windows and linux i think we have to set the memory one here if we want it to actually appear here
so i don't know if there's a way to set all of them
but i'm gonna just choose some here so i'm gonna do memory used we'll do logical use space here
we'll go here and do free space i think that might be the reason why it's not showing up for us
maybe process time here we did use memory used percentage of use space
we will do um do reads and writes so here we have some for our linux
so i think if i we set that then we should be able to see some information uh syslogs is just a type of logs that
you can collect we're not going to look into those today but yeah that's the
management there we could also try setting some here maybe one for memory i can't remember
what the one for memory is called here because they do all of this kind of stuff
like windows has so much stuff i can't make sense of it i think we might want page faults
and paged bytes or whatever that one's called there
we'll give that a go and we'll see if that starts collecting information
i'm going to go back over to here and oh this is curzon that's the windows
machine we're going to go to the linux machine here and we'll give it another go
see if we get anything this time around so we'll go back to performance
we'll say what is being collected run okay and so i think we're just going to have
to wait a little bit so i'm just going to stop the video here and i'll see you back in maybe 15-20 minutes okay
all right so i'm still waiting here and i don't seem to be seeing any data but that's okay because there's another way
we can kind of approximate this and we don't have to wait forever especially if you don't have a
lot of time like me the thing is is that azure has a log analytics tutorial and within here they
have a demo environment that i'm very familiar with and so this can approximate exactly what i'm trying to
show you here if you notice at the top here this is microsoft azure monitoring logs
demo log blade so if we go to queries
and we go down to virtual machines and we'd look at what data is being collected we can hit run here
and this is the kind of information we are trying to see right this is what we want to see what it's actually
collecting that's okay we're not doing remote desktop anymore here and if we go to
virtual memory available we'll just clear that out there and run that
this is kind of the information we're trying to find all right so i just wanted to show you that when you collect
those metrics that you can run these queries and this is that query language up there kusto
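For reference, an available-memory query over the Perf table looks something like this in KQL (the counter names are common defaults; adjust them to whatever your agent configuration is actually collecting):

```kusto
// Average available memory (MB) per computer in 5-minute bins, last hour
Perf
| where ObjectName == "Memory" and CounterName == "Available MBytes"
| where TimeGenerated > ago(1h)
| summarize avg(CounterValue) by Computer, bin(TimeGenerated, 5m)
```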
and that's pretty much it that's all i really wanted to show you so let's go ahead and tear down all our stuff it's
all self-contained within a resource group so what we can do is go to all resources in the top left
corner there and we called it dax so we can search for
dax here whoops actually it's all over the place we just click here into the name
and we'll hit delete resource group type in dax we have a lot of resources created for
this one and there you go so yeah hopefully that helps you understand uh those
relationships between automation accounts where we can run runbooks do patching you know stuff like that
install python packages you know the fact that log analytics is basically a data
lake for all your data that you can create alerts that you have metrics you have guest os metrics which collect
those three values the memory the network and the additional cpu information or disk space
and just the variance there so you don't get stuck so there you go
hey this is andrew brown from exam pro and in this follow along we're going to learn about managed identities or system
managed identities so that we don't have to pass credentials or hard code them into our virtual machines or other
environments so what we're going to do is launch ourselves a new virtual machine this is going to be a windows
server so go ahead and hit create we'll create a new virtual machine and from here we'll say create new and i'd
say my-managed-identity but then i'd have to spell the whole word so we'll
just say my-managed-id and from here central us is fine the zones are fine
but we're not going to use ubuntu we're going to hit see all images and from here we'll choose windows 10
and there's a lot of options here so i want windows 10 pro um
it's a bit hard to decide so i'll go gen 2 21h2 hopefully you'll be okay doing this as well um it's using standard b2s
46 so just make sure you're not overspending so that should be fine i think we need at least a b2s to run this
um but we're not going to keep it around for very long so username i'm going to go admin user and for the password
capital t testing one two three four five six i can do that again capital t testing
one two three four five six exclamation mark just because of the requirements such a pain that's what i always do in
these follow alongs and we're going to use rdp to connect to it because we do have to uh remote into
it we'll just say i confirm we'll hit review create of course we don't actually have a license but we'll be
okay because we're not doing anything too serious and we'll give it a moment here
and we'll hit create so that is going to create while that is going i'm going to
uh we'll actually have to wait a little bit but we'll make our way over to the resource group because this is where
we're going to be adding an iam role assignment at some point so if we go over to
iam over here and we'll wait we'll actually wait for this to provision so this takes
quite a few minutes so i'll see you back in maybe 5 10 minutes because we are launching
windows server they just take longer so after waiting you know five to ten minutes the environment is ready so
we'll go ahead and go to that resource and we're going to want to go connect and choose rdp this is going to be
remote desktop protocol i believe that's what it stands for if it doesn't not a big deal just know it's called rdp and
so we'll have an rdp file we'll double click it we'll hit connect this is going to open up the remote desktop
client and so we will type in admin user and then capital t testing one two three four five six if you're on a mac you're
like how do i do this well it is in the mac store just type in remote desktop it can open here i'll just show you what
i mean over here we do rdp mac you can see that it's free to download
so it's just you download that and you can open those rdp files so we are just waiting for this to spin
up this takes a little bit of time so just give it a few minutes all right so after waiting a little
while the environment is ready so hopefully you can see what is going on here and so what we want to do is open
up powershell so we'll type in powershell here and we will right click and run as
administrator this does have an outbound connection to the internet so i know that we are okay for our connection and
then this is where you will need to install azure modules because it's not
pre-installed and this part really sucks because we have to wait a lot more again so i hope you like
waiting a lot but anyway what we'll do is right click this properties here and we will bump this up because this font
is super super small and we'll type in install-module
az and that'll install all the cmdlets for azure and we'll wait for some prompts the prompts are a bit
finicky hopefully we can just do a for all but you might have to hit yes a couple times
but this part is just so so so slow there we go so notice it doesn't let me say all we'll just say y
and i know that it's going to prompt us again so we'll just wait a little bit here the progress will show up at the
top it's kind of glitchy um yeah so now we'll do a for all and so now we just wait a long time now
it seems like it's not doing anything but you will eventually see something appear up at the top here that shows you
that there's progress but you just have to be very patient yeah so it'll look like that i'm just
showing you what it looks like and i'm gonna go back and pause till this is all done because it takes forever
all right so i think it's done it's sometimes hard to tell because it'll just kind of like blank out here
and it's not doing anything so i'm not 100 sure but
what i'm going to do i'm going to see if i can just open it twice this is just the powershell experience
right and um to test this we'll just do a connect and see if it works so i'm going to do
connect i just want to close the old one if it's actually still doing something so connect-azaccount
-identity and so if this prompts us with something then we know that it's working correctly
so identity not found and that's fine so the thing is if we just did this it
would actually prompt us to log in so see it's prompting us to log with microsoft so this this worked
um but we need to actually connect an identity to this virtual machine so that's what we're gonna do so we'll
minimize this we'll go back over here so go into your uh the resource group that we had before
under iam and we're going to add a new role assignment and from here we will choose contributor because that
will give us enough access for stuff we'll choose manage identity and we will
choose system identity no no no no that's not what i want because it should show the virtual
machine here maybe it's because um we opened this before it was done creating so that's
why it's not showing up so we'll try this one more time we'll go to contributor because it should show the
virtual machine manage identity and select and
i want to see the virtual machine
weird so i'll click way out i'll go back in i'll go to iam again we'll go to oh you know what
we launched this virtual machine we forgot to check box something so when you launch the virtual machine
you're supposed to checkbox manage system identities here so this would be
where would it be i only ever do it through the setup i
never ever have to retroactively do it oh what a pain identity maybe system assigned managed so we'll just say on and
we'll save it grant permissions do you want to enable a system managed identity yes
but normally when you do the virtual machine hold on here we'll just go pretend like we're setting
up a new one we're not actually going to set one up but when you are setting it up
and we go over to advanced no no management we would checkbox it on
here that's not what we did so we forgot to do that but we turned it
on here so hopefully it works without us having to restart the image if we go to role assignments over here
i mean that's just another way of getting to this page so it doesn't really matter for the resource group
what we'll do is go back to our resource group we will find it here
and we will go to iam and we will add a role assignment
and we will add contributor we will go next we will say manage identity we will select we will go to virtual machine we
will select this one we'll hit select we'll review and assign we'll review and assign and then it will assign us
contributor access so now we should be able to connect our identity and then also get access to some stuff
because what we really want is to run a get-az command like to just show that we can get
anything because right now if we do this get-azsubscription we're not
connected in any way so we'll go here and we'll do identity
fingers crossed it just works yes it does that was nice we didn't have to restart the machine
and so now if we do get-azsubscription it should work
so i don't see it working did i type it wrong the term is not recognized oh i must have typed it wrong
subscription here let's run get-azsubscription so if we were not authenticated that would not work
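Collected in one place, the PowerShell from this exercise looks like the following (Az module cmdlets; `Connect-AzAccount -Identity` only succeeds when run inside the VM, where the managed identity endpoint is reachable):

```powershell
# One-time install of the Az cmdlets (slow; answer the trust prompts)
Install-Module Az -Scope CurrentUser

# Authenticate as the VM's system-assigned managed identity --
# no username, password, or stored credential involved
Connect-AzAccount -Identity

# Sanity checks: these fail until Connect-AzAccount succeeds
Get-AzSubscription
Get-AzResource   # lists only what the role assignment's scope allows
```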
and so it does work now so we are pretty much all done here well we could probably set
some resources just to make sure this is working properly so maybe do get-azresource like that
there so list some resources and it's only listing what's in the resource group because that's all it
has permissions to it's not going to show things outside of that because that's the scope when we did it
at the resource group and that's why we went in at the iam level there so we are all done we will go ahead and close this out
okay we'll make our way back to our resource group we're already on it we'll go ahead and delete this resource group
say delete and there you go
hey this is andrew brown from exam pro and we are looking at virtual machines for azure cheat sheets so let's jump
into it so azure virtual machines allows you to create linux and windows virtual machines the size of the virtual machine
is determined by the image the image defines the combination of vcpus memory and storage capacity the current limit on a
subscription is 20 vms per region vms are billed at an hourly rate a single instance vm has an availability of 99.9
percent when all storage disks are premium two instances deployed into an availability set will give you 99.95 percent
availability you can attach multiple managed disks to an azure vm when you launch an azure
virtual machine other networking components will be either created or associated to your virtual machine
such as nsgs network interface cards ips and a vnet you can bring your own linux
by creating your own virtual hard disk azure vm comes with a variety of sizes that are also uh optimized for specific
use cases and there's a lot of different ones here uh so it's not super important but i mean you should know the broad
categories here azure compute unit acu provides a way of comparing cpu performance across azure skus and the
one they're all benchmarked against is the standard a1 all other skus then represent
approximately how much faster that sku can run the standard benchmark you can install the azure
mobile app so you can monitor your vms on the go hyper-v is microsoft's hardware
virtualization product it lets you create and run software versions of a computer called virtual machines that's
how it all works there are two types or two generations of hyper-v vms generation one which works with most operating
systems and generation two that supports most 64-bit versions of windows and current
versions of linux and freebsd operating systems hyper-v vms are packaged into
virtual hard disk formats such as vhd and vhdx files not on the exam but just understand that hyper-v vm generations
are different from the azure vm generations so azure doesn't have all the features of hyper-v okay
there are three ways to connect to virtual machines the first is via secure shell the slide says sure shell but it's secure
shell so that'll get fixed at some point ssh happens on port 22
and rsa key pairs are commonly used for authorized access then you have rdp which is a graphical interface to
connect to another computer over the network connection this is how you can remotely connect to the windows server
via virtual desktop this happens on port 3389 on tcp and udp now i mark these red because i want you to remember these two
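Sketching the SSH route (the key path, user, and IP below are placeholders, not values from the course):

```shell
# Generate an RSA key pair for the VM (no passphrase, demo only)
rm -f /tmp/azvm_key /tmp/azvm_key.pub
ssh-keygen -t rsa -b 2048 -f /tmp/azvm_key -N "" -q

# The .pub half is what you paste into the Azure portal when creating the VM
cat /tmp/azvm_key.pub

# Then connect over port 22 (placeholder user/IP, so left commented out):
# ssh -i /tmp/azvm_key azureuser@20.0.0.4
```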
so make sure you know them okay azure bastions and we'll clear that because now it's getting a bit of a mess
here there we go so azure bastion is a service you can deploy that
lets you connect to a virtual machine using your browser and the azure portal it supports both ssh and rdp useful when
you only have a browser like a chromebook or do not have permission to configure or install software then you
have update management this allows you to manage and install operating system updates and patches for both windows and
linux virtual machines that are deployed in azure on-premises or in other cloud providers update management will perform
a scan for update compliance a compliance scan by default is performed every 12 hours on a windows machine and every 3 hours on a
linux machine and it can take between 30 minutes and six hours for the dashboard to display updated data from the managed
computer so there you go that is the virtual machines cheat sheet
so let's take a look at arm templates uh and before we jump into it let's just make sure we're familiar with
infrastructure as code because that's what arm templates are all about and this is the process of managing and
provisioning of computer data centers such as azure through machine
as a json file and in this case an arm template because that's all it is a json file and rather than physical hardware
configuration or interactive configuration tools and just to simplify that even further basically you're
writing a script that's going to set up your cloud services for you and with iacs they usually come in two different
flavors we have declarative that's where exactly what you write is what you get and imperative where you
spell out the step-by-step commands to run rather than the end state and either way the tooling is going to save
you a lot of time writing those scripts and so arm templates are json files
that define azure resources that you want to provision and azure services that you want to configure and with arm
templates they are going to be declarative so you get exactly what you define uh they're gonna stand up tear
down or share entire architectures in minutes and they're going to reduce configuration mistakes and you know
exactly what you have defined for a stack to establish an architectural baseline for compliance so you
definitely want to use arm templates or iac whenever you can and try to avoid using the console unless you're doing
development or test workloads but let's just run through the big list of what you can do with an arm template i know
this is a big boring list but let's just get through it as quickly as possible just so we know what we can do
with arm templates so with arm templates that are declarative so what you see is what you
get you can stand up tear down share entire environments minutes you can reduce configuration mistakes you can
establish an architectural baseline for compliance they're modular so you can break up your architecture into multiple
files and reuse them they're extendable so you can add powershell and bash scripts to your templates uh it has
testing so you can use the arm ttk the template test toolkit to make sure exactly what you've deployed is what you wanted
you have preview changes so before you create infrastructure via templates you see what it will create built-in
validation so it will only deploy your template if it passes track deployments keep track of changes to your
architecture over time policy as code so apply azure policies to ensure you remain compliant and then you have
microsoft blueprints which we did cover here which establishes relationships between resources and
templates so it's just one step further where arm templates don't keep any relationship with the resources where
blueprints is like a better version of arm templates and then you have ci cd integration exportable code so exporting
the current state of the resource groups and resources authoring tools i believe is the last on
our list here so you can use visual studio code that has advanced features for authoring arm templates makes it a
lot easier to write them so there you go so now that we've talked about what arm
templates can do let's actually take a look at what one looks like and so just to get a good snapshot or overview let's
just define what the skeleton is the general structure of an arm template so here on the right hand side we have json
and you can see that we have a structure there so we have schema content version api profile parameters variables
functions resources outputs and let's go down the list here and see what all these things are so the schema describes
the properties that are available within a template and so the idea is that you have that
json link there and it's going to say we expect the schema the actual structure to have these parameters then you have
the content version this is the version of the template and you can provide any value for this element it's totally
abstract it's just your way of keeping track of the version of your current template then you have the api profile
you use this value to avoid having to specify api versions for each resource in the template you have parameters
these are values you can pass along to your template you have variables this is where you transform your parameters or resource properties using functions and expressions you have functions these are user-defined functions
available within the template you have resources these are the azure resources you want to deploy or update and then
you have outputs and these are the values that are returned after deployment so let's go more in depth into some of these things so let's take a deeper look here at the
a resource section within your arm template and so a resource is an azure resource you want to provision could be
a virtual machine could be a database and so here on the right hand side look at where it says resources and we're
going to take a breakdown of the actual attributes that we can set so the first thing is the type and
this is going to follow the format of the resource provider and resource type so there you can see that we are setting
up a storage account then we have the api version and this is the version of rest api that we're going to use for
that resource and each resource provider publishes its own api version so you've got to go look up each one because they
all could be different then you have the name of the resource and so i believe there that is using a variable so that
is going to be dynamic then you have the location so most resources have a location property and that's the region
that you want it to be deployed in then you have other properties which will vary based on the kind of resource so for a storage account you want to be able to set the kind and within the properties you're going to have a bunch of other options there and it's just going to vary okay but anyway let's take a closer look next at parameters
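Before moving on, the resource section just described can be sketched as a minimal template. This is an illustrative sketch, not the exact template from the slide: the name expression, apiVersion, and sku values are assumptions, so check the resource provider's documentation for current api versions.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "variables": {
    "storageName": "[concat('stg', uniqueString(resourceGroup().id))]"
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-04-01",
      "name": "[variables('storageName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2",
      "properties": {}
    }
  ]
}
```

Note how the type follows the resource provider/resource type format and the name is dynamic via a variable, exactly as described above.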
so let's take a closer look here at parameters and these allow you to pass values to your arm templates so on
the right hand side we have our arm template and we saw before
resources but if you look above there we have a thing called parameters and so there we are setting a parameter and
then down below in the resource we are accessing a parameter and so let's just actually talk about some of the parameter options we have so the first thing you'll want to do is set a type and on the right hand side you can see that we're setting it as a string and we could define that as a string secureString int bool object secureObject or array and there are some other things that we can provide when we are defining our parameters that we're not showing in that diagram but we have default
values so this is if you do not provide if there's no value being passed in this is what it should use then you have
allowed value so this is where it's an array and you're restricting it to only the values that are there then you have
the min value so let's say it's an integer you want to say this is the minimum value so nothing lower than 10 then you have the max value so nothing above 10 you have the minimum length so if you're working with strings you might say it has to be a minimum of five characters then you have the maximum length so let's say you have a maximum of five characters then you have the description so this is just going to be something that when you
are using the arm template within the portal you know what it's for so if the parameter name alone isn't descriptive and you want to provide more information that's what you can do so there you go
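The options above can be sketched in JSON. These parameter names and values are made up for illustration, not taken from the slide:

```json
"parameters": {
  "storagePrefix": {
    "type": "string",
    "defaultValue": "stg",
    "minLength": 3,
    "maxLength": 11,
    "metadata": {
      "description": "Short prefix used to build the storage account name"
    }
  },
  "storageSKU": {
    "type": "string",
    "defaultValue": "Standard_LRS",
    "allowedValues": [ "Standard_LRS", "Standard_GRS" ]
  },
  "instanceCount": {
    "type": "int",
    "minValue": 1,
    "maxValue": 10
  }
}
```

Inside a resource you would then reference one of these with an expression like `[parameters('storagePrefix')]`.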
so let's take a look here at functions for arm templates and these allow you to
apply transformations to your arm variables and they come in two flavors we have template functions which are
built in functions and then user defined functions which are custom functions that we're creating to add whatever other kind of functionality we want so the way functions work is you give the function name so here it's called equals
and then you have parentheses and you pass in uh what you want to transform so if you see parentheses that's how you
know that is a function so let's talk about some of the built-in functions and so there are a lot
available for us and they're generally self-explanatory so i'm not going to be showing examples but let's just go
quickly through the list so you have array functions like array concat contains createArray empty first etc then you have comparison functions like equals less lessOrEquals greater greaterOrEquals you have date functions you have deployment functions and for example parameters and variables are actually functions so that's kind of interesting then you have logical operators so and or if not numeric functions like add div float int min max object functions so contains empty intersection and then you have resource functions so extensionResourceId providers reference resourceId etc and there's one more i think we have string so we have base64 concat contains etc i'm not going to
show you how to do user defined functions i don't think that's that important
but i just want to show you that you have a lot of functions available to you okay
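To give a feel for the syntax, here are a few of those built-in functions combined inside expressions. The variable and parameter names are illustrative assumptions, not from the course slides:

```json
"variables": {
  "storageName": "[concat(parameters('storagePrefix'), uniqueString(resourceGroup().id))]",
  "isProd": "[equals(parameters('environment'), 'prod')]",
  "vmCount": "[if(variables('isProd'), 4, 1)]"
}
```

The square brackets tell the template engine to evaluate the string as an expression, and the parentheses after each name are how you spot a function call.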
so let's take a look at variables and variables are used to simplify your arm
templates and the idea is that you're transforming parameter and resource properties using functions and then
assigning them to reusable variables so under your variables section notice that we have storage name and on the right
hand side we're just using a bunch of functions and assigning the result to storage name so the idea is that when we have to go
call them in our resources it's going to be a lot shorter than if we had to write that whole thing in there and
what's more interesting is that you can nest your variables within a json object so on the left-hand side you have this
variables environment settings test instance sizes etc and then on the right-hand side we have a parameter that we've defined here and then down below you can see that we're calling that parameter and then we're calling
variables environment settings and then using the square braces to then call within that nested object i know that
seems really confusing but just take a moment to look at it and it will make sense and you'll just
understand that there's a lot of power there so there you go so last on our list here for arm templates is outputs and outputs return values from deployed resources so you can use them programmatically this is
very simple the idea here is that you have outputs and let's say you want uh to output the resource id so you can say
that it's a string and in the value you could use a variable or a bunch of functions and the idea is that when you
use the cli so imagine here you are using the cli and you're saying show me this deployment group and i want to see the output value so see where it says query properties outputs resourceID value the idea is that now i'm going to be able to programmatically access it after i've created the resource and this is a great way of chaining things so you might have a bunch of arm templates you want to run in sequence and you're going to need the outputs to go into the next one and that is what you're going to use outputs for so there you go
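The chaining just described can be sketched with the Azure CLI. The deployment name, resource group, and output name here are hypothetical, and running this needs a live Azure subscription:

```sh
# Fetch a single output value from a completed deployment
az deployment group show \
  --resource-group myRG \
  --name mydeployment \
  --query properties.outputs.resourceID.value \
  --output tsv
```

The value printed by that command can then be fed as a parameter into the next template in the sequence.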
hey this is andrew brown from exam pro and we are looking at azure resource manager templates also known as arm
templates and this helps you deliver infrastructure as code meaning that when you have a resource such as a virtual
machine or a storage account instead of manually configuring it every single time through
the portal what you can do is provide a configuration file that defines all the properties that you want it to be
configured with and the idea is that you can keep this file and share with other
other people so they can easily create the same resources as you and then you know exactly how your stuff
is configured so what we're going to do is launch a new template now you can't go up here and just type in arm because these arm templates are managed at different levels so at one level there's the subscription and then there are resource groups so when you have a resource group you have deployments within them and that's where these templates are deployed but just to deploy one from here what we're going to do is type in deploy why they didn't make it so you can type in arm i do not know but if you go down
here we have deploy a custom template and so from here we have some common templates so if i click into web app and
i go edit a template we already have some stuff pre-filled in i'm just going to go back and discard that go back to
select a template and we're going to build our own and by default we'll have that schema that content version which
is 1.0.0.0 our parameters and our resources so today i want to launch a virtual machine
and what you normally would have to do is go here and look up what it is that you want to create so if it's this microsoft compute virtual machine you'd go through here and you'd have to make sure you have all these properties so
you define the resource here right the type and then you define the properties that
you want and down below you can go through here and see them all that's a lot of work i don't want to do that so
i'll go to add resource here drop this down and click uh where's it virtual machine where are
you there you are and i'm going to call this one wharf and then wharf and wharf because it's
not just going to create a virtual machine it's going to create the other things that i need with it as well such
as the storage account the network interface and the virtual network so you can see that we have a bunch of
parameters here so the name the type the name the admin username the password and the os version
oh you know what i think i chose a windows one i do not want a windows one i want a linux one
because that is easier for me to work with here so we choose ubuntu so i'll just fill this in again
all right and so back up here you know we have the ubuntu version to choose between some versions here and then there's the type so that's for replication then we have variables here so if we go to vm size this is the vm size it will set here variables can either have string values or you can use functions to transform other parameters into other stuff that you'll reference throughout your template then down below
we have those resources here so what we'll do is actually i'm going to copy this
because it's very highly likely we're going to want to make some kind of change and so i have vs code over here
on the left or right hand side i'm just going to paste that on in there and we'll set this as a json file to make things a little bit easier here great and what i'll do is just move that off screen and we'll go ahead
and we will save this and we'll see if we can deploy this so i'm going to type in wharf here and
we'll launch in canada east i'll name this wharf we will name the username warf but lowercase and then we'll do
testing one two three four five six capital on the t notice that it is hidden there
and then we will choose 14 which is defaulted here and lrs we'll go ahead and do review and create
and we'll hit create here so this is going to fail i already know because it has a misconfiguration it'll tell us why but while that's going we'll take a look at our inputs so these are the values that were inputted these are the
outputs if we had defined any which we have not and if we go back to our template i
just wanted to show you that we have that secure string so when we were typing our password that's why we didn't
see it so just things like that so i'll go back up here and our
deploy failed why what happened so we open it up here the requested vm size standard d1 is not available in the
current region so the template we have is not that great it needs some configuration because we
can't use d1 i think that doesn't exist anymore and so what we really want to use is the standard b1 ls
all right standard b1 ls so i'm going to cut that and for the time being i'm going to go
back to our original template and this is one big template i'm going to look for those variables oh
they're all the way at the bottom here nice and so i'm going to just go ahead and paste that in
b1 ls just double check making sure i spelled that right standard b1 ls looks good to me
so i'm going to move that off screen and the question is what do we do when a deploy fails so
let's go take a look at what has happened here so this all got deployed into a resource group and under here
this is where our deployments are so when we look at this template we can see that it failed we
could click into here get the same information and if we click into here it just brings
us back to where we just were but if we go look at what was actually deployed under our resource group
under the overview we'll notice that it created the virtual network the storage account and the network interface so
when it fails it creates what it can but it doesn't roll back okay
so the question is is then how do you do cleanup so you might think i'll go to deployments and what i'll do is go ahead
and delete that template and we can go ahead and do that by the way you can't edit this template see i just want to show you that you cannot edit it we can download it and stuff like that but you might think well if i go ahead and delete that template
just making sure we're in the right place here you might think that might roll back
those resources but it doesn't it just deletes the template so if you really want to get rid of the stuff what you
got to do is go ahead and delete all these resources manually
so i wish it had a rollback feature but that's just how it is
but there are some nice things that uh azure does here which we'll talk about in a moment so i think we have adjusted
it to the correct value now so hopefully this is going to be all we need to make it work
so what we'll do is go to our deployments here but we can't do it from here so we'll go
back to the top here and type in deploy and we'll go to custom template and what we'll do is build our own
template in our editor and i'm just going to copy the contents here
okay we'll go copy and i will go paste
and we'll make sure that this is all good looks fine to me we'll go ahead and hit save
and we will choose wharf so we don't have to make a new one and we will fill in the name as worf the username is warf i'll call it warf2 just in case it helps us keep track of what we're doing
here testing one two three four five six with a capital on the t
14 and lrs and we'll go ahead oh we have one issue here cannot deploy to a resource group that's deleting we'll go back and we will hit create here
i don't think i deleted the resource group let me just go double check i'm almost certain i deleted all the contents of it right oh so there's already one here so we're just waiting for that to delete just gonna go delete for us please thank you it failed to delete we'll go take a look as to why resource is not found we'll go back to our resource groups give us a refresh here okay so you know what i must have deleted the resource group which is totally fine i could have sworn i only deleted the contents of it but we'll just call this worf regular then we'll go ahead and hit
create here and so this time i have a better feeling about this
and so we will just have to wait a little bit it won't take too long i'll see you back in a moment
okay so after waiting a little bit here our thing seems to be deployed so if we go to resource groups we can see that
our virtual machine is deployed so that's pretty much all there is to it one other thing i'd like to show you is
that whatever is in your resource group you can actually export the template so if you did configure something manually all you'd have to do is find the resource go up here to export template and there's your template so it just has that single resource in there and if i go into here and select multiples and i go export template look it's gonna include all that stuff so if you already have existing resources that you provisioned and you want to have them as a template that's what you can do notice that some things won't be included in the template when you do that but you can just go ahead and download them and then you have them for later so yeah that's all there really is to
arm other than learning the nitty gritty of the actual language that's just how you work with it there so what i'm going to do is make my way over to my resource group here and i'm just going to go ahead and delete this here and we're all good to go
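The portal steps in this walkthrough can also be scripted. Here is a sketch with the Azure CLI, assuming the edited template is saved as template.json and the resource group is named wharf; it requires a live Azure subscription:

```sh
# Validate first, then deploy the template into an existing resource group
az deployment group validate --resource-group wharf --template-file template.json
az deployment group create --resource-group wharf --template-file template.json \
  --parameters adminUsername=worf

# Export the current state of the whole resource group as a template
az group export --name wharf > exported-template.json
```

Validating up front would have caught the unavailable vm size before anything was partially created.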
hey this is andrew brown from exam pro and we are looking at azure resource manager templates also known as arm
templates so the thing we need to know is infrastructure as code this is the process of managing and provisioning computer data centers through machine readable definition files such as json files rather than physical hardware configuration or interactive configuration tools iac comes in two flavors we've got declarative which is what you define is what you get and imperative so you say generally what you want and it will fill in the blanks for you arm templates are json files as we're suggesting above that define the azure services you want to provision and configure they are declarative so you get exactly what you define
an arm template is made of the following json structure you should know the structure pretty well because you might
see questions where they're showing you that code so be sure you know all these parts schema describes the properties
that are available within a template content version the version of the template you can provide any value for
this element api profile use this value to avoid having to specify api versions for each resource in the template
parameters values you can pass along to your template variables where you transform parameters or resource properties using functions and expressions and i didn't include them in this cheat sheet but there is a bunch of variations on variables and functions i think that's a practical skill you should just grab we don't want this in a cheat sheet functions user-defined functions available within the template there are so many functions it wasn't even worth pulling them out and putting some here resources the azure resources you want
to deploy or update and under resources you've got type of resource api version the name the location other properties
which can be a bunch of different stuff so there's no consistency there and then outputs which are values that are returned after deployment so you can do things with them programmatically so there you go that is the arm templates cheat sheet hey this is andrew brown from exam pro
and we're looking at azure container instances also known as aci and this allows you to package deploy and manage
cloud applications using containers or the way i like to think of it as fully managed docker as a service
azure container instances allow you to launch containers without the need to worry about configuring or managing the
underlying virtual machines and acis are designed for isolated containers for simple applications task automations and build jobs let's talk about some of the reasons why you'd want to use containers over vms so containers can be provisioned within seconds where vms will take several minutes containers are billed per second where vms are billed per hour so you'll save a lot more money containers have granular and custom sizing vcpus memory and gpus where vm sizes are predetermined so those are the benefits of containers over vms aci can be utilized for both windows and linux containers you can persist storage
with azure files using aci containers and honestly if you have containers or functions you have to have an external
storage mounted to persist that's just the way you do it acis are accessed via fully qualified domain
names which is one of the things i really appreciate about azure services because most of its services are like that
azure provides quick start images to help you start uh launching example apps but you can also source containers from
azure container registry docker hub and privately hosted container registries so you've got options there
let's just talk about container groups because this is pretty much the only major component you have to worry about
these are a collection of containers that get scheduled on the same host and the containers in a container group share a lifecycle resources a local network and storage volumes so the idea is that you have these tightly coupled containers so all of them act as a service within that container group so here you can see an
example of a couple containers that are mounting azure files on different directories there and underneath all of
that is running on an azure virtual machine container groups are similar to
kubernetes pods yeah and it says similar to but not really the same thing multi-container
groups can currently support only linux containers which is kind of a bummer but that's just what it is and there are two
ways to deploy a multi-container group you can use arm templates when you need to deploy additional azure service
resources or just a yaml file when you want to deploy when your deployment only includes
container instances let's take a look at container restart policies and what these do is allow you to change how the containers restart and there are three different options we have always never and on failure so the first one here always means always restart the container and the idea is to
keep your container running as long as possible and the reason why you'd want that is if you're running a web server
some other providers would call that a service then we have never so run only one time this is great for background
jobs and so other providers would just call this a task then you have on failure so containers that encounter an error should restart and it's as simple as just choosing that option when you are creating the container
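From the CLI the restart policy is just a flag. A sketch, assuming a resource group named myRG and a live Azure subscription:

```sh
# Run a one-off background job that should not restart when it completes
az container create \
  --resource-group myRG \
  --name myjob \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --restart-policy Never
```

The accepted values are Always, Never, and OnFailure, matching the three options above.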
let's take a look at container environment variables also known as
env vars that allow you to pass configuration details to your containers and you can do this through the portal the cli or powershell which is always the case with azure and so it's as simple as just putting in your key and your value one other thing i want to point out is that you can also pass in secure environment variables so the idea is that sometimes you don't want them in plain text so if you have say a stripe secret key you don't want anyone ever seeing that in production so the idea is that and i don't think you can do this through the portal but you can do this through the cli or powershell you provide secure environment variables instead of regular environment variables and that way you can pass values securely so they're never exposed to human eyes let's talk about persisting storage and
we talked about that a little while ago when we were looking at azure files but containers are stateless by default so when a container crashes or stops all of its state is lost to persist state you need to mount an external volume and there's quite a few different things we can mount such as azure files secret volumes an empty directory or a cloned git repo so you've got a few options there and to mount a file volume you need to do this via powershell or the cli you're going to give the following details when you launch the container there's going to be nothing in the portal so you've got to do it this way all right
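As a sketch, mounting an Azure Files share at creation time can be combined with a secure environment variable in one command. The account, share, group, and variable names are placeholders, and this requires a live Azure subscription:

```sh
# Mount an Azure Files share so state survives container restarts,
# and pass a secret without exposing it in plain text
az container create \
  --resource-group myRG \
  --name mycontainer \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --azure-file-volume-account-name mystorageacct \
  --azure-file-volume-account-key "$STORAGE_KEY" \
  --azure-file-volume-share-name myshare \
  --azure-file-volume-mount-path /aci/data \
  --secure-environment-variables STRIPE_SECRET_KEY="$STRIPE_SECRET_KEY"
```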
let's talk about some of the cli options that we can use to troubleshoot our containers a lot of this stuff is probably accessible via the portal but it's great to know how to do this via the cli so the first thing is getting the logs so you can use az container logs and that's going to bring your logs back then you can use az container attach and this is going to give you diagnostic information while the container is starting up probably with cloud-init information then you have az container exec and this allows you to execute remote commands and what you can do is run /bin/sh which actually will allow you to have an interactive container it's like having a terminal right into the container which is very useful and the last is just grabbing metrics from azure monitor so az monitor metrics list you could use the portal for that but i just wanted to show you all those things because they might show up on the exam
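Those four commands look like this in practice. The resource group and container names are placeholders, and a live Azure subscription is required:

```sh
# Fetch the container's stdout/stderr logs
az container logs --resource-group myRG --name mycontainer

# Stream startup diagnostics and log output
az container attach --resource-group myRG --name mycontainer

# Open an interactive shell inside the running container
az container exec --resource-group myRG --name mycontainer --exec-command "/bin/sh"

# Pull CPU metrics from Azure Monitor (full resource id of the container group)
az monitor metrics list \
  --resource /subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.ContainerInstance/containerGroups/mycontainer \
  --metric CPUUsage
```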
hey this is andrew brown from exam pro and we're going to take a look at azure container instances so here it is so all
we got to do is go to container instances we'll hit add and the nice thing is that azure provides us with a
hello world one so it's very easy for us to get started it's a linux machine and it looks like
it's pretty inexpensive there so we'll stick with that i'm going to create a new group here we're going to call it
banana and we'll name the container instance banana
and east u.s 2 seems fine to me you'll notice we're on a quick start image if we wanted we could use something from
the docker hub and provide our own link but we'll just stick with the quick start image for today we're going to go
ahead and hit next to networking just to see what we have as options you can make it public or private we'll go to
advanced hold on here yep those are just the ports you can expose we'll go to advanced and for the
restart policy we can set on failure always or never we can pass in environment variables and i've covered
this a lot more in detail in the lecture content so we don't need to really dive deep into this
and we'll go ahead and create this instance and so we'll have to wait a little while
here and i'll see you back in a moment okay and so after a short wait our container instance is ready we'll go to
that resource there and take a look around so on the left hand side we can go to our containers and there we can
see it running we can see the events down below of what's going on so you can see that it's pulled the image
it successfully pulled it and it started the container some properties nothing interesting
there the logs if we wanted to see stuff and if we wanted to connect to the instance we could also go here and hit
connect which is kind of nice i don't have any reason to do that right now and it's also not going to work the way we're doing it but i just wanted to show you that you have those options
you can do identity so that means managing it with role based access controls but what i want to see is actually this hello world working i'm assuming there must be a hello page i've never looked at it before so we're going to go here grab the public ip address and paste it in at the top and there we go so we have deployed an instance onto azure container instances or a container i should say so nothing super exciting to talk about here but we do need to know the basics there if we wanted to deploy other containers it's just the one there so that's all you really need to do but yeah
so yeah hopefully that gives you an idea there i'll just go back to the list here so we can see it and we'll go ahead and just delete that i'll probably do it via the resource groups on the left-hand side like i always like to do and we will go into banana here and we will delete banana and there you go
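For reference, the same hello-world demo can be done entirely from the CLI. This is a sketch with assumed group and container names, and it requires a live Azure subscription:

```sh
# Create the group, run the quickstart image with a public IP, then clean up
az group create --name banana --location eastus2
az container create \
  --resource-group banana \
  --name banana \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --ports 80 \
  --ip-address Public

# Grab the public IP to open in a browser
az container show --resource-group banana --name banana \
  --query ipAddress.ip --output tsv

# Delete everything when done
az group delete --name banana --yes --no-wait
```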
hey this is andrew brown from exam pro and we're looking at the azure container instances cheat sheet and this is a two pager so let's jump into it so aci allows you to launch containers without the need to worry about configuring or managing the underlying virtual machine acis are designed for isolated containers for simple applications task automation and build jobs containers can be provisioned within seconds where vms can take
several minutes containers are billed per second where vms are billed per hour so you get greater savings when you're using containers containers have granular and custom sizing vcpus memory and gpus where vm sizes are predetermined aci can deploy both windows and linux containers you can persist storage with azure files acis are accessed via fully qualified domain names so it's just like a domain name then we have container groups these are
a collection of containers that get scheduled on the same host machine the containers in a container group share a lifecycle resources a local network and storage volumes container groups are similar to kubernetes pods multi-container groups currently support only linux containers this is one of the cases where it's not windows it's linux okay there are two ways to deploy multi-container groups via an arm template or a yaml file i
imagine you can do them via blueprints as well a container restart policy specifies what a container should do
when their process has been completed aci has three restart policies always never and on
failure okay we'll go on to the second page here so azure containers are stateless by default when a container crashes or stops all state is lost to persist state you need to mount an external volume you can mount the following external volumes either azure files a secret volume an empty directory or a cloned git repo for container troubleshooting these are just azure cli commands you should really really know so i'm just going to list them out here you've got az container logs az container attach az container exec which is for executing commands and az monitor metrics list so there you go that is the aci cheat sheet hey this is andrew brown from exam pro
and we're looking at azure container registry also known as acr which creates and maintains azure container registries to store and manage your private docker container images and related artifacts so azure container registry is a managed
private docker registry service based on the open source docker registry 2.0 and the idea here is that you can use
container registries with your existing container development deployment pipelines and it also has this thing
called registry tasks to build container images in azure and the idea here is that you can pull images uh from your
container registry to various deployment targets like kubernetes dc os which stands for datacenter operating system it's basically an operating system for containers and docker swarm and there are also many azure services that directly support acr so aks azure app service azure batch azure service fabric and a lot more so it's just a matter of saying hey use this from acr and it works right away developers can also push to container
registry as part of a container development workflow with delivery tools such as azure pipelines and jenkins and
it's so easy to push to container registry there are many ways to work with acr via
the cli azure powershell the azure portal and the azure sdk and there's also this really good extension called the docker
extension for visual studio code something you absolutely want to install and maybe i'll make a full slide on it
because i like it so much

let's take a look here at acr tasks
which allow you to automate os and framework patching for your docker containers and uh we have a few
different options here so the first is quick tasks these allow you to push a single container image to a container
registry on demand and you don't need to be running a local docker engine installation to do it so that's a really
good benefit you can also trigger some automated build actions so maybe you push some source code updates you update the container base image or you want to do it on a timer that's on a schedule you can also have multi-step tasks where you go through various steps to complete the task
for each acr task it has an associated source code context this is the location of the set of source files used to build a
container image or other artifact tasks can take advantage of run variables so you can reuse task definitions and standardize tags for images and artifacts so there you go

hey this is andrew brown from exam pro
and in this follow along we are going to be working with acr also known as container registries so type in acr at the top actually that doesn't work type in container and we will go to the registries and we'll create ourselves a new registry this lets us have multiple container repositories or images hosted here on azure so i'm going to just type in my acr and we'll say ok and the registry name is going to be my acr i'm just going to put a bunch of numbers here so maybe 8080 it probably doesn't like the hyphens
there we go all the names are crazy you just have to flip around to figure it out we're going
to choose basic because we can do everything we want on basic we'll go ahead and hit review
and create and we'll give it a moment to allow it to start deploying
and we'll just wait till this is done deploying our registry all right so that took less than five
seconds and we'll go ahead and go to that resource and so what we need to do is go to
access keys we're going to enable admin user so that we can actually log in via docker to our registry so now what we need
to do is get some repositories going and if we go over to repositories tab i don't think there's much we can do here
i don't know if you can create repositories directly here i always just push
yeah so you don't you don't have a button to say create a repository like github you actually have to push to it
so i'm going to go to github and i'm going to create myself a new repository i'm going to call under the other
example we'll say my acr it's already there so i'll say new and
we'll go down below actually doesn't matter we'll just go ahead and hit create
and from there i'm going to open this up in gitpod all you have to do to open something up in gitpod is to prefix the repository url like this you can also do this in your local environment but you do have to have docker
set up and so that's why it's easier to use git pod now cloud shell in azure actually has docker installed but
doesn't run a docker instance so it's not that easy to work with so it really is easier to just work with git pod here
or locally if you have docker installed so we're going to say docker login and then we're going to provide the
login server we'll hit enter it's going to ask for our username so that is our username
there so i'm going to paste it in it's going to ask for our password so i'm going to copy that paste that on in
there hit enter and so now we can push to that repository i'm going to pull an image called hello-world that's just a standard application to say hello world if we do like docker run -itd hello-world i think that's what it is to
type it whoops i can actually spell it right then we could probably run it over here
there's like a docker tab so we can see if it's running notice that it exited if we view the logs
it probably just says hello world yeah hello from docker so there you go that's a simple way of running a docker
container and terminal will be reused for other tasks
press any key to close it so i just hit space there about the i t d flags i is for interactive t allocates a tty and d is for detached so it runs in the background if we type docker images the hello world image shows up there but anyway we want to push this to
azure so the way we're going to do that is we're going to type in docker tag because we have to tag this image
and we're going to choose the image id here and then we need to provide the repository name so i'm going to assume
it's the login server and then we just say example
hello world hit enter and we can say docker push and then copy
this here and that should push it over to azure
we'll go back over here give it a refresh so repository is there so
that's all it takes to push so this is one part but we'll figure the other part next here soon okay
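the push flow we just walked through can be sketched end to end with a few shell commands note that myacr8080 here is just a placeholder for whatever login server your own registry shows on the access keys blade:

```shell
# recap of the push flow above, "myacr8080" standing in for your own registry
docker login myacr8080.azurecr.io        # username and password come from access keys
docker pull hello-world                  # grab a sample image to push
docker tag hello-world myacr8080.azurecr.io/example/hello-world
docker push myacr8080.azurecr.io/example/hello-world
```

after the push the example/hello-world repository shows up under the repositories tab in the portal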
hey this is andrew brown from exam pro and what we're going to do is use the last um
follow along in order to reference an image in our container here because we pushed an image but what if we want to
reference one as an application in an actual docker file because that is a very common use case so in our gitpod
environment which we still have i'm going to type in dockerfile and in here what we can do is reference
our image i always forget what the top line is export run import something so i'm going to just look for example i'm
going to go to um docker ruby because i know how to use this image
pretty pretty uh pretty good so we'll just go with it and it's from i always forget that so
we'll type in from and the idea is that we want to reference something like this but i know
if we use the hello world one it's a bit finicky to build so i don't want to use this one but i'm going to say ruby
or um yeah ruby's fine here and so we're going to pull the the ruby image so we say docker
pull ruby and we'll have to push this one so we'll pull it
give it a moment and now we'll type in docker um
images and we need to grab the ruby image id and we'll say docker
tag this one and we gotta go up and grab this part
here paste that in i could just say ruby here and then we can do docker push
ruby and while that's going i'm just going to go here and type in run
echo hello ruby because the idea is we just want to build a docker image that's from
a remote location so what i'm going to do is just save this so we'll say uh save my docker file
it's like the only file we saved here just waiting for that to finish pushing and then what we're going to do is we're
going to uh you know close this environment and then launch it again because i don't want to have any of
those images there i don't have to figure out how to delete them it's like docker rmi it can be really
finicky about deleting images but the idea is i just want to make sure they're not there so that when we pull we know
that it actually is pulling from there not locally we will have to do a docker login here
in a moment so type in we'll type in docker login
and we'll go back over here and we will log in say allow
we'll go back to here we'll grab the username we'll go back over to here
we'll grab the password so that's good and um hey where's our dockerfile
our dockerfile seems to be gone i guess we forgot to sync it and that's totally fine we'll just type it
in again it's not a big deal from here we'll say ruby and i'll say run echo
hello ruby and so now what i want to do is i want to do docker build period
and so if we are logged in and we can pull it so it's in great shape so it's the fact that we are pulling
that image and that was the other part i wanted to show you there so it doesn't matter if this finishes because it
obviously is working but we'll give it a second here to uh finish downloading there we go
um and so that's all so we'll go ahead and close this and we'll make our way back over to azure
and we will go and type in resource groups and we will choose my acr
and we'll go delete resource group and we'll delete that we're done so there you go
hey this is andrew brown from exam pro and we're taking a look here at the azure container registries cheat sheet so acr is a managed private docker registry service based on the open source docker registry 2.0 you can use azure container registries with your existing container development and deployment pipelines use acr tasks to build container images in azure pull images from an azure container registry to various targets and targets could be
kubernetes dc/os which stands for datacenter operating system and docker swarm
for acr tasks this allows you to automate os and framework patching for your docker containers you can trigger
them you can trigger automated builds by source code updates updates to container base images or triggers on a schedule you can create multi-step tasks each acr task has an associated source code context and tasks can take advantage of run variables that's the end of it there you go see you in the
next section hey this is andrew brown from exam pro and we're looking at azure app services
which allows you to quickly deploy and manage web apps on azure without worrying about the underlying
infrastructure and specifically this is known as a platform as a service so azure app service is an http based
service for hosting web applications rest apis and mobile backends and you can choose your programming language and
it can be either a windows or linux environment and it's a platform as a service so if you've ever used heroku it's basically the heroku of azure and so azure app service takes care of a lot of the underlying infrastructure for you so
it can do security patches of the os and languages load balancing auto scaling and automated management
and then there's a lot of things that you can implement for integration such as azure devops github integrations docker hub package management easy to set up staging environments custom domains and attaching ssl certificates so you can see that it just basically takes care of everything for you because it's really hard or time consuming to do all this on your own in azure and the way it works is you
pay based on an azure app service plan honestly i find these really confusing i really like how aws does elastic beanstalk because you're just paying for the underlying services but azure has all these crazy tiers here but
you have the shared tier which includes free and shared and doesn't support linux you have the dedicated tier which is basic standard premium premium version two and three and the isolated tier and another thing i need to note is that azure app services is not just for traditional vms or monoliths you can also run
docker single or multi containers and when you set up a project you're going to choose your domain name on azurewebsites.net obviously you can override that with your custom domain name but there you go
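everything we just covered in the portal can also be sketched with the cli assuming placeholder names for the resource group plan and app and noting that the runtime string format can vary between cli versions:

```shell
# rough cli equivalent of the portal setup -- all names are placeholders
az group create --name my-webapp-rg --location canadaeast
az appservice plan create --name my-plan --resource-group my-webapp-rg \
  --sku B1 --is-linux
az webapp create --name delta-flyer --resource-group my-webapp-rg \
  --plan my-plan --runtime "PYTHON|3.9"
# the app then answers at https://delta-flyer.azurewebsites.net
```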
let's talk about azure app services runtimes and so let's define what
runtime is a runtime is software instructions that are executed while your program is running and so runtime
generally means what programming language libraries and frameworks you're going to be using and so runtimes for
azure app services are predefined containers that have your programming language and some commonly used libraries for that language installed along with commonly used web frameworks for those languages
and so the idea is that you're going to choose your runtime it could be .net .net core java ruby which i'm very disappointed in azure about because as of shooting this video they do not support ruby for application insights yet they have it here in azure app services node.js
php python and there's all the logos if you like seeing the logos and so azure app services generally has multiple
versions so they have like ruby 2.6 2.7 for php they have a lot of versions for node.js they have a lot of versions
but i just want to point out uh that it's pretty common for cloud providers to stop or to retire the old ones at
some point to stop supporting them uh you know that's just because they want to keep things modern
and the other thing is like it also helps you keep with your best security practices because really you should
always be trying to upgrade to the latest version uh for those security patches and and things like that so
there you go

now let's say you wanted to use a
language that wasn't supported on um azure app services like you wanted to use uh elixir what you could do is
create your own custom container either for windows or linux uh and so uh you just go ahead and create your own docker
container in your local environment you can push it to an azure container registry and then the idea is you can
deploy your container image to your app service so i just wanted you to know that you could do that if there are some languages or other things or maybe you're using a supported language but you need some bundled packages that are just baked into the container so there you go
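as a rough sketch pushing a custom image and pointing a web app at it could look like this with all names being placeholders and noting the container flag name has changed across cli versions so check az webapp create --help:

```shell
# build the image in azure and push it to the registry in one step
az acr build --registry myacr8080 --image elixir-app:v1 .

# create a web app that runs that container image
az webapp create --name my-elixir-app --resource-group my-webapp-rg \
  --plan my-plan \
  --deployment-container-image-name myacr8080.azurecr.io/elixir-app:v1
```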
let's talk about deployment slots so deployment slots allow you to create
different environments for your web application and associate a different host name this is useful when you need a
staging or a quality assurance environment or maybe you need to like a developer environment any kind of
environment you want so think of it as a way to quickly clone your production environment for other uses and so down
below here you'd have your deployment slots and there's your slots to maybe you have apps staging beta so that's the
different host names there and so the idea is that not only do you
have other environments but there's also this thing called swapping and the idea is like imagine you make a clone of your production environment and then you deploy the latest version to it and then when you decide that it's in good shape what you can do is swap it with your current production environment and then just retire your old one this is called blue green deployment
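sketched with the cli with placeholder names again the blue green flow looks like this:

```shell
# clone production config into a new staging slot
az webapp deployment slot create --name delta-flyer \
  --resource-group my-webapp-rg --slot staging \
  --configuration-source delta-flyer

# deploy and verify on the staging hostname, then swap it into production
az webapp deployment slot swap --name delta-flyer \
  --resource-group my-webapp-rg --slot staging --target-slot production
```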
and so that is a great way to do deployments and i just wanted you to be aware of that

so i wanted to talk about app service environment because i just wanted to show you how
azure app service is not just for little toy apps or your small startup it can be really useful for larger enterprises so app service environment is an azure app service feature that provides a fully isolated and dedicated environment for securely running app service apps at high scale and this is going to allow you to host windows web apps linux web apps docker containers mobile apps and functions
and app service environments are appropriate for application workloads that require very high scale isolation
and secure network access high memory utilization and again you know when you think of
platform as a service you don't think at this kind of scale so it's really impressive that azure does this
customers can create multiple ases within a single azure region or across multiple azure regions making ases ideal for horizontally scaling stateless application tiers in support of high requests per second rps workloads and ase comes with its own pricing tier and that's the isolated tier and ase can be used to configure security architecture apps running in an ase can have their access gated by upstream devices such as a web application firewall also known as a waf apps or ases can be deployed into availability zones using zone pinning which just means pinning the deployment to a specific availability zone
and there are two deployment types for ase we have external ase and ilb ase let's go take a quick look at what those
look like so here's a big diagram and look in the middle there that's where our azure
service environment is it's in our own vnet and a subnet and the reason it's called an external ase is because it exposes the ase hosted apps on an internet accessible ip address and then down below if you wanted to connect this and this is generally what people are probably using it for you can connect it to your on-premises network via a site to site vpn or express route so that's something you can do with it and the other part is that because the ase is within the same vnet it also has access to resources within the vnet without any additional configuration so that's really nice and then the second one looks exactly identical but there's one key difference it has this ilb there and an ilb stands for internal load balancer and that is basically the only difference so there you go
all right let's take a look here at deployment for azure app services and
let's just first define deployment so that's the action of pushing changes or updates from a local environment or
repository into a remote environment and azure app services has a lot of options for us and that's to me is the most
powerful thing about azure app services because it's easy to deploy a web app to a
virtual machine backed by a database but figuring out deployment is a very time consuming and tricky thing and azure app
services gives us so many options unbelievable amount of options it is so so great of them to do that for us so we
can run from a package deploy zip or or rar deploy via ftp deploy via cloud sync deploy continuously from like github or
use azure pipelines deploy using custom containers using docker hub azure container
registry deploy from a local git repo deploy using github actions deploy using github action containers or deploy with
an arm template so we're not going to look at all of these how they work but let's look at a few of them so we just
have an idea how robust of these options are the first one we're going to look at is running from a package and this is
when files in the package are not copied to the wwwroot directory instead the zip package itself gets mounted directly as a read only wwwroot directory so basically all other deployment methods in app service have to deploy to the following directory if you're on a windows machine it's going to be home\site\wwwroot and if you're on linux it's going to be /home/site/wwwroot and so since the same directory is used by your app at runtime it's possible for deployment to fail because of file lock conflicts and for that reason the app might behave unpredictably because some of the files are not yet updated so this is the reason why you might want to run from a package it just circumvents the issue of replacing files in the folder another method is the zip and war
deployment this uses the kudu service that powers continuous integration based deployments and kudu is used for a lot of things but it's the engine behind git deployments in azure app services and it's an open source project that can also run outside of azure and so kudu supports the following functionality for zip file deployments and it supports a lot more than just zips but it does deletion of files left over from a previous deployment an option to turn on the default build process which includes package restore deployment customization including running deployment scripts deployment logs and a file size limit of two gigabytes and you can use it with the cli the rest api via curl or the azure portal so by the way that's you uploading a zip but you're essentially using kudu underneath okay and so let's go take a look at another deployment method which is file
transfer protocol uh ftp has been around for forever and this is pretty much how people thought you were supposed to
deploy your apps in the late 90s and early 2000s i don't think it's a really great way to deploy but the point is is
that if you want to do it this way or you have a use case that makes sense you can do it so the idea is that the
deployment center you would uh say i want to use ftp and you just you get an ftp endpoint your username and password
and that's your credentials you use your ftp client to connect uh very old school but it's an option
for you which is really nice another way which to me is a bit bizarre but it's cool that you can do it is you can use
dropbox or onedrive to deploy using cloud sync so the idea is that you have dropbox it's a third party cloud storage
service onedrive's the same thing it's just microsoft's uh thing and so you go to deployment center
configure dropbox or onedrive and when you turn it on it will just sync or it'll just create a folder in your
dropbox drive and that will get synced so that's the one for onedrive that's the one for dropbox and this is going to sync to that wwwroot there i would have loved to take some screenshots but i couldn't find how to turn the service on but i know it exists i just thought it was so bizarre but yeah there you go those are the deployment methods

so the way you pay
when you use azure app service is you need an azure app service plan and that's going to determine you know how
much you pay and what's going to be available to you they got three tiers which we're going to go uh through here
shortly and we did mention them earlier i honestly do not like this whatsoever this tells you this is a microsoft
product because it has these uh wonky pricing tiers i hope in the future they'll change it but that's just what
it is and so let's go learn it so basically what you do is you have like this big wizard uh that tells you all
the stuff that you can have it tells you what's included and stuff like that but let's work our way through it so the
first thing is the shared tier and there's two types here we got free and shared and so there's the free tier that
red one there it's called f1 it gives you one gigabyte of disk space up to 10 apps on a single shared instance no sla for availability and each app has a compute quota of 60 minutes per day so there you go you get some free tier
there then there's the shared tier this provides up to 100 apps on a single shared instance no sla availability each
app has a compute quota of 240 minutes per day and the thing is i didn't know where the button is for that the next thing right beside it is the dedicated tier and i thought that's what it would be because it says 100 total acus i'm not sure but anyway the point is there is a shared tier and you can't use the shared tier on linux based systems so if you're on linux you've got to use bigger instances anyway which i don't like too much but that's what it is moving on we're now into the
dedicated tiers and look it's right beside the free tier that's the green one it says b1
and if you expand it actually has a couple other tiers there so i just wanted to show you that there was uh
three uh there and so uh for dedicated tiers we've got basic standard premium premium version
two premium version three and we're looking at basic that's what that is more disk space unlimited apps three levels in this tier that offer varying amounts of compute power memory and disk usage that's b1 b2 and b3 and then the next thing over is the standard tier and we had to switch tabs there onto the production
tab notice the terminologies don't really match the tiers uh and so uh with standard you can scale
out to three dedicated instances has an availability sla of 99.95 percent and three levels in this tier that offer varying amounts of compute power memory and disk storage and so then
uh that's that tier there and we're on to our last here which is the premium tier
and this scales to uh 10 dedicated instances it has availability sla of 99.95 percent and multiple levels of
hardware so that's the dedicated tiers then we're on to the last thing which is isolated and this is really only going
to be used i think for ase so the isolated tier has dedicated azure virtual networks full network and compute isolation scales out to 100 instances availability sla of 99.95 percent and again i think it's just for those ases but
there you go that is all the tiers and hopefully it makes sense to you but it is a little bit tricky to figure out
what to choose but you don't really get to like pick at a granular level that's what i don't like
but it is a really great service azure app service it does figure everything out
for you so maybe it's okay for you

so let's say you're using azure app
services to host um one of your applications you need to run a random script how are you going to do that
that's where web jobs comes into play so it comes at no additional cost and web jobs is not yet supported for linux
which is of no surprise here because microsoft is all about windows but i hope one day they will support it the
following types they support as of today are command files bat files executables we've got powershell files bash files
php files python files javascript files and java files when you go ahead and create your job you're going to have
between two types you've got continuous and triggered continuous means it's going to run continually until stopped which is pretty clear and this particular mode supports debugging so if you need to debug that's the mode you're going to be using triggered only runs when a trigger occurs and you have different kinds of triggers here so you notice we have scheduled where we can enter a cron expression and this will also expose a webhook that can be called to enable scenarios like manual triggers
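a scheduled triggered webjob is just a zip of a script plus a settings.job file holding the cron expression which uses six fields with seconds first so a sketch of one could look like this with the file names being just examples:

```shell
# a scheduled webjob is a script plus a settings.job with a cron expression
mkdir -p mywebjob

# the script the webjob runs (a .cmd file, one of the supported file types)
printf 'echo hello from my webjob\n' > mywebjob/run.cmd

# six-field cron, seconds first: this one fires at the top of every hour
printf '{ "schedule": "0 0 * * * *" }\n' > mywebjob/settings.job

# zip the folder and upload it on the webjobs blade in the portal
```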
it doesn't support debugging though so looking at web job scaling and this is for continuous only we have multi-instance and single instance multi-instance will scale your web jobs across all instances of your app service plan and single instance will only keep a single copy of your web job running regardless of the app service plan instance count so there you go

hey this is andrew brown from exam pro
and we are going to be learning about azure app services in this follow along uh and it's a service that's supposed to
make it easy for you to deploy web applications i say supposed to because it really depends on your stack azure
has more synergies with other stacks than others so like if you're like me and you like ruby on rails
you're going to find a lot of friction with rails and linux but if you're using something like windows servers or python
or net you're going to have a much easier time still really great service just wish they'd make it a bit more
broad there but let's hop into it so before we can go use that service let's make sure that it's activated and so
we'll go over here and we'll go to azure subscription and then down below we're going to go to
resource provider now you'd think what you could do is just type in app services
and you'd be wrong because the the service is under a particular provider so if you want to figure out what
provider it is we can go um azure resource providers and they have a page on documentation
here that lists them all so if i search for azure app services it's under web and domain registration
so we're going to make sure this is registered if we're using a custom domain which we are not today we need
this one activated so going back here i will type in web and you can see it's registered so if yours is not registered
go ahead and hit that i believe this by default is generally registered with new azure accounts so i don't think that is
an issue for you but we'll go back up here close these additional tabs and we will type in azure app services
and we will look for that service so there it is and we'll go ahead and hit add
and so i'm going to give it a new name i just made it a moment ago but i'm going to try again and try to use the same
name so we're going to call this voyager great and then i'm going to go ahead and name this voyager and i already know
that that is taken so i'm going to type in delta flyer and these are fully qualified domains so
they are unique with azure app services you can run a docker container we're doing code this time around and what i
like to use is ruby but again you know if i want to use the ci cd i'm not going to be able to use
the deployment center with ruby so that is not possible and so we're going to go with python and
run either a flask or a django app i haven't decided yet i am in canada so let's go to canada east
and down below here we have the plans generally the plans will tell you the
cost underneath look you'll notice that it's loading but i just want to show you that there are some discrepancies in
terms of pricing so if i was to go to azure app services pricing and we were to pull this up here we can
kind of see the pricing here okay and if we scroll on down right now we're looking at a premium v2
and oh no i don't need help i'm okay you'll notice that it's 20 cents per hour so if i go here and do that times 730 because there's roughly 730 hours in a month that's 146 dollars i believe this is showing me usd and here it's showing me 103 canadian which is lower so it could be that because i'm running in a canada east region the price is different but you can imagine that if i had this cost at 146 usd converted to cad i'd actually be paying around 182 dollars so you've got to watch out for that kind of stuff but i'm pretty sure this is what the cost is so just be aware that if you look stuff up in here it's not necessarily reflective so you've got to do a little bit more work to figure that out
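the back of the envelope math above is just hourly rate times roughly 730 hours in a month for example:

```shell
# 20 cents per hour for a full month (~730 hours) is about 146 dollars
rate_per_hour=0.20
hours_per_month=730
monthly_usd=$(awk -v r="$rate_per_hour" -v h="$hours_per_month" 'BEGIN { printf "%.0f", r * h }')
echo "$monthly_usd"   # 146
```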
if we wanted to go here we cannot choose the free tier when we're using linux if we're using windows i believe we can use
it we're working with linux today so that's just how it's going to be the b1 is totally fine but we want to utilize deployment slots and deployment slots are an advanced feature of the production tiers and that's the only way we're going to be able to use them here this is 20 cents per hour again so i don't want to be doing this for too long but i think what we'll do is start here and later do an upgrade from dev to prod so we can experience that i'm going to go and just choose b1 okay so go next we do not need any application insights
for the time being and it will not let us so it's okay we'll go next review and create
and we'll go ahead and create this resource here and i will see you back when this is
done so um our resource is now set up we'll go to resource and now that we're in
here you'll notice if we hit browse we're not going to see anything because we do not have anything deployed which
makes sense right so we're going to actually have to go ahead and deploy something so we're
going to make our way over to the deployment center and it's just going to tell us that we
have yet to configure anything and that's totally fine we're going to go to settings
it'll give it a moment and so the thing is is that we're going to need something to deploy
i did not create an app but the great thing is in the azure documentation they have
a bunch of quick starts here all right and apparently they have one for ruby as well but today we are looking at python
uh and so they actually have an example repository for us here which is github.com azure samples python
docs hello world and i mean i can go make a repo for you but we might as well just use the one that is already
provided to us so i'm just going to pull this up to show you what's in it it's a very very
simple application even if you don't know anything about building web apps i'm going to walk you through it really
easily here, okay. so we're going to open up app.py. so we are using flask; if you've never heard of flask, it is a very minimal python framework for creating web apps. very uninspiring home page here, but it gets the job done. it's going to create a default route for us, which we call hello here, and it returns hello world. so that's all that's going on, very very simple. and we have a requirements.txt, which is for our package manager; i don't know why python uses txt files, it seems very outdated to me, but that's what they use, and in here we have flask. all right, so we're going to use that repo. it's a public repo, so it should be very easy for us to connect. so we'll drop down, go to github
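for reference, the sample app we just walked through is roughly this (a sketch from memory of the azure-samples repo, so the exact wording of the file may differ; assumes flask is installed per requirements.txt):

```python
# sketch of the sample repo's app.py -- flask is the only dependency
from flask import Flask

app = Flask(__name__)

@app.route("/")          # the default route
def hello():
    return "Hello World!"
# no app.run() here: `flask run` (or gunicorn) starts the server for us
```

and requirements.txt is just a one-line file containing `flask`.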
and uh the next thing we need to do is authorize github all right so i ran into a bit of trouble
there because i could not uh authenticate my uh github account but you know what i just made another github
account so that made it a lot easier i'm going to go ahead here hit github and we're going to try to authorize it and
so now i'm logged into this new one called exampro dev and we'll go ahead and authorize this application and we're
now in good shape this repository doesn't have anything in it so
if i want to clone something i guess i'll probably have to fork that repo so we'll give it a moment to authorize
and while that's going i think that's what i'm going to do i'm going to go and fork the example repo if i can find the
link again here myself i believe
it is that's still authorizing over there i'm still looking for it so
it was like examples or something samples or examples all right so i found a way around the
problem i just made a new github account so that's all i had to do um and i just won't be using my primary
account until i get my phone back but so what we'll do is go hit connect i'll hit authorize
and it didn't prompt me because it already connected to this new one called exam pro dev you might have to put your
credentials in here and it's going to ask me to select some things it's a new account so there are no organizations
there are no repositories there are no branches totally brand new so what i'm going to need to do is get a repo in
there so we'll just go ahead and fork the azure samples one so that is azure samples
python docs hello world if i type that right we're in good shape
i'm going to go ahead and fork this repository i'll say got it
and then i'll move this off screen here this is now cloned you should see it cloned here
and we'll go back here and this probably isn't live so there's no refresh button here so we'll have to hit discard
and we will give this another go here and we will select our organization which is our name there is the
repository. the branch naming is a bit outdated, i'm sorry, but it's called master; that's just what it is in azure's sample repo, not my fault. okay, um, and i think that's it.
i don't know if we need a workflow configuration file; i don't think so. i'm just going to double check here... no, i don't think so. and what we'll do is we'll just go ahead and save that, and so now we are set up for deployment. all right, so now that that's all hooked
up if we were to go to browse we're actually still seeing the default page a deployment hasn't been triggered just
yet. so the way it works is it's using github actions. so if we click into our, we call it main branch, i know they've got the wrong name, and we click into the .github/workflows folder, below here we can see we have a yaml file, and this is for the github actions integration. what it's doing is specifying the branch, how it's going to build, that it's going to run on ubuntu-latest, and the steps: it's going to check out the code, set up the python version, build it, and so on. and so in order for this to take action we'd actually have to go ahead and make some kind of manual change, which we have yet to do. so what we'll do is we'll go back to our main here
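the workflow file we just looked at is roughly this shape (a trimmed sketch; azure generates the real file, so the app name, secret name, and action versions here are placeholders):

```yaml
# sketch of the auto-generated GitHub Actions workflow
name: Build and deploy Python app

on:
  push:
    branches: [master]   # the branch the deployment center wired up

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2       # check out the repo
      - uses: actions/setup-python@v2   # set up the python version
        with:
          python-version: "3.8"
      - run: pip install -r requirements.txt
      - uses: azure/webapps-deploy@v2   # deploy to the app service
        with:
          app-name: exampro-app         # placeholder app name
          publish-profile: ${{ secrets.AZUREAPPSERVICE_PUBLISHPROFILE }}
```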
and it should be as simple as just changing something here. i wasn't sure how it's supposed to know that it should be serving the hello route, oh, i guess, yeah, sorry, this decorator means it's going to route over to here. so i'm just going to make any kind of change here, doesn't matter what it is,
just one space we'll go ahead and give it a commit and
if i go back to my latest commits we should see that i made that change there it is
we'll go back over here and this should be deploying so if we go over to logs
here you can see one's in progress right now, okay, and so that's what we're waiting on; we're just going to see that finish there. we could probably open the logs and get some more information, and it just brings you back over to
github actions and so here's github actions and it's performing the stuff here so we're just going to give it time
here and i'll see you back in a moment so we didn't have to wait too long it only took one minute and 29 seconds if
we go back over here um we might need to do a refresh and so we can see this is reflected over here
and so if we go back to it doesn't really matter if we go to settings or logs here but i'm going to hit browse
and see if my page is deployed it still is not so we do have a small little problem here and it's really going to
just have to do with how the app is served so that's what we need to figure out next
all right, so our app is not currently working, and there's a few approaches we can take. the thing i can think of right away is we should ssh into that instance. if you scroll on down here, under developer tools you can go to ssh and click this button, and that's going to ssh you right into that machine right away. you can also access ssh via the cli, i believe the command is something like az webapp ssh; it'll do the exact same thing, and you can do that from the cloud shell, but that's not what we're doing today. if i do an ls in here, we're in linux, and we can see we have our app here. and what i would do is see what's running, so i would do a, not puma, sorry, that's a ruby thing, a ps aux | grep python, and you can notice that we have gunicorn running; that's what runs our python instances, so you're not looking for flask, you're looking for python here. and if we wanted to make sure it was working we'd just type curl localhost, which hits port 80. curl just means let's go fetch that page; it should print the html back to us, and it doesn't, so that means the app is not running. so what you could do is run flask run, and it's going to start on port 5000 by default. so what i can do is go back up to my deployment center here, and i'm going to go get that link here
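the troubleshooting steps above, sketched as commands (the app and resource group names are placeholders, and the exact ps flags are the usual form, not a transcript of what was typed):

```shell
# ssh into the app service container (or use the SSH blade in the portal)
az webapp ssh --name exampro-app --resource-group voyager

# inside the container:
ls                        # our deployed app files are here
ps aux | grep python      # gunicorn should be serving the python app
curl localhost            # hits port 80; no HTML back means nothing is listening

# start the app by hand just to prove it runs (not a real fix --
# it dies as soon as the ssh session ends)
flask run --host=0.0.0.0 --port=80
```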
and just ignore the fact that it's working it's it's not working right now i know for certain it's not
but if we add :5000 to the url that won't resolve, because port 5000 isn't open, so we can't just put 5000 in there, even though flask's default port is 5000. so if i stop this and specify port 80, then this will start the app on port 80, and now when you go here, okay, it will work. this is not a great way to do it, because of course as soon as you kill it here the site stops running. so what we need to do is provide a configuration for gunicorn, which is a python wsgi server; again it's not so important that you know exactly what these things are, but the idea is that
you understand as administrator you want to make sure you have an app that runs after you do a deploy and so in this
particular case we need a startup.txt, and interestingly enough there is an example repo by the same author as the other one we were looking at here, i believe it's the same person, or it might not be, but they have a startup.txt, and in here you can see that it binds on 0.0.0.0, starts up four workers, and starts up the app. all right, so that's something that we can go
ahead and do so what i will do is i will go back to my
github repository that we have here and i can just go ahead and add a new
file so i'm going to say add a file create a new file here we'll call it startup.txt
i'm going to copy this command here and paste it in there, so gunicorn will bind, set the workers, and start up the app. um, startup:app means the app is run from a startup module, so if i go back here, i think they have a startup.py here, and that's all that it is doing. um, i could do it this way i suppose, let me just see here... theirs is a slightly different app, they actually have a full app going on here, and i just want a very simple flask app. so i think what i can do is put flask run with port 80 in here, and that should start up the app there.
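the two startup.txt variants in play here look roughly like this (the module and app names follow the sample repo, so treat them as placeholders):

```shell
# option 1: the sample repo's gunicorn command -- bind all interfaces,
# 4 workers, serve the "app" object from startup.py
gunicorn --bind=0.0.0.0 --workers=4 startup:app

# option 2: what we're doing for this very simple flask app instead
flask run --host=0.0.0.0 --port=80
```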
i'm going to go ahead and commit that file okay and as soon as i commit that if i
go back to my actions it created that startup file there so it should trigger a build
it's queued up um and i'll just put this tab up here so we'll be back here in two seconds and if i give this a nice
refresh yeah you can see it deploys in progress so
this doesn't take too long we'll just wait close that there we'll just wait a few minutes we click logs it just opens
it back up here and we'll see how that goes all right so your deploy may have
finished there but the thing is is that we're not going to really know if a change has taken effect unless we
actually go ahead and update our code. so what i want you to do is go to your code tab, go to your app.py, we'll hit edit, and i'm going to go ahead and change this to vulcan, and then we'll scroll on down and hit commit
changes and we'll make our way back over to our deployment center and we'll give it a refresh here and
we're just going to wait until this one is complete and we will double check to make sure that that has changed if it's
not we will take action to fix that okay all right so we just waited a little while there for that deploy to happen
and if we go to our website here it is taking effect so that's all we had to do to get it working so that's pretty good
um, so that is deployment. so let's talk about deployment slots. in order to utilize this feature we're going to actually have to upgrade our plan, because we cannot use slots on the basic plan here; we've got to go to standard or premium. so let's go ahead and give that an upgrade.
so here's the b1 we're gonna go to production here um and i think
yeah we're gonna have to choose this one here uh very expensive so the thing is we're gonna just upgrade it temporarily
unless there's more options down below that are cheaper yeah these are the standard tiers
let's go with this one here because it's only eighty dollars again we're not going to be doing this for long but i
want to show you how to do staging slots and auto scaling okay so we'll go ahead and apply that there
and now it says that it's applied so if i go back to our app here and we click on deployment slots sometimes it doesn't
show up right away if it doesn't that's not a big deal you just wait a bit but today it's super fast so we're going to
go ahead and add a new slot we're going to call it staging we're going to deploy from our
production branch here and i'm going to go ahead and create that there and we'll just wait until that's done
okay great so we waited a little bit there and our slot is created so i'm going to
just hit close there and so now let's go take a look and see if we can actually see the application
here. so i just clicked into it, i click browse, and we're getting the default page, so nothing is actually deployed to it. so how are we going to do that? that's the main question here.
so what i'm going to do is i'm going to make my way over to the deployment center
and you can see that it's not configured for this slot so we're going to have to set it up all
over again even though it copied over configuration settings it didn't copy over the code so we go to github we'll
choose our organization again i'm going to choose the repository we're going to choose
that main branch again there we're going to let it add a workflow and notice that this time it's going to call it
staging.yaml so there'll be a separate workflow that gets created we're going to go ahead and save that there
and what we can do is again click onto our branch name there, and if we click into our workflows we'll now notice that we have a staging one; it's the same thing, but it should now be able to deploy. so
the whole purpose of these deployment slots is that they help us deploy different versions of our apps, but also give us a place where we can view things before we actually roll them out, so we can make sure 100% that they are working correctly.
i don't think this will automatically push out let me just go to my actions to see if this is deploying notice that we
have two workflows now we have staging here uh and yeah it looks like it's going to
deploy here so we'll just wait a little bit but maybe what we can do is try to have
a slightly different version uh for each one here okay but we'll just let that finish and i'll
see you back in a moment all right so our deploy finished there so now if we go back to our website here
we go browse, we should see that application; it says hello vulcan. and if we go and take this out, we still have hello vulcan. so how can we have a variant of this that we can push out to the slot? so what i'm going to do is go back to my application here, i'm going to go to code, and i'm
just going to make a minor change. um, also, is that spelled right? startup doesn't look correct to me, so maybe i'll go and adjust that file, but it doesn't seem to be affecting anything, which, i'm a bit surprised there. so what i'll do is go and edit that file and give it the proper name. can i rename this file? yes i can, so we'll call that the startup file. i thought we needed that for deploying; i guess it just works without it, which is
nice if we go back here i'm going to go and actually just want to edit my
app here again, and i'm going to go and edit this, and we'll say hello pandoria, or hello andorians, maybe,
and so if i go back to my actions, the question is: what is it deploying, production or staging? and it looks like it's going to do both. one way we can tell is we can go to our logs here,
and we can see that we did two deploys, so there's one change here. if we go back to our main application, our deployment center here, and go over to our logs, you can see that they're both deploying. so it doesn't seem great that that's how it works. so the question then is how would we facilitate separate deploys, right? how could we do that? i suppose what we could do is just make a separate staging
branch so if i go over to code here i don't think we can just make branches
through here so what i'm going to have to do is go ahead and oh i can create a branch right here so
we'll just type in staging and we'll go create ourselves a new branch
and now we're in this branch and what i'm going to do is go ahead and modify this and we're just going to call this
hello klingons okay we'll go ahead and update that and so
this should be a separate branch so you think what we could do is go in and just change our settings so that it deploys
from that one we'll go back to our deployment slots we'll click into staging here
and we need to change our configuration settings um i think we could just do it from
here hold on here i could have swore it specified the branch if we go to deployment center here
i think it's set up on that other branch there i think we just adjusted here so yeah i
think we could just adjust these settings we can't discard them
but maybe what we can do is just go in and modify that file so we will go into our code here
and we will go ahead and click into here go into staging and we'll just change what
the branch is called so we'll just say staging and we'll hit start commit and we will
save that and we'll see if it actually reflects those changes there so we will go here
and hit refresh we'll see if it picks up staging now if we go to settings
it's not picking it up, so, um, i'm not sure. i don't think we want to perform a redeploy operation. so maybe what we'll do is just do a disconnect here, because it's connected to the wrong branch. it asks whether to save the workflow file; okay, we'll just go ahead and delete it,
it's not a big deal we'll just have to make a new one here we'll go to github
we'll choose our organization again or repository our staging branch this time around
we'll let it add one see it says we could use an available workflow so we could have kept it there and added it
there um and we'll go ahead and save that so now we'll have two separate branches
there and we'll give that some time to deploy because that will now trigger a deploy off the bat and so i'll see you
back here in a moment all right so after a short little wait here it looks like our app is done
deploying so we'll go over here we'll make sure that this is our staging server is good and we want to see that
our production is different perfect so we now have a way to deploy to each one but imagine that we want to swap our
traffic so we're happy with our staging server and we want to roll that out to production and that's where we can do
some swapping so what we'll do is click the swap button and we're going to say the source is the staging and this is
our target production we're going to perform that swap right now we can't do a preview because
we don't have a particular setting set, that's okay. and it's kind of showing if there are any configuration changes between the slots; we don't have any, so that's totally fine as well. we'll go ahead and
hit swap, and that's going to swap those two. i believe it has zero downtime, so
we'll be in good shape if that happens there and we'll just give it a moment to do
that. great, so after a short little wait there the swap is complete. and so if you remember, this was our production, right, and so if i hit refresh it now says klingons, and if i go to my staging server it should be the other way around, right?
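the same swap can be done from the cli instead of the portal (a sketch; the resource group and app names are placeholders):

```shell
# swap the staging slot into production with az
az webapp deployment slot swap \
  --resource-group voyager \
  --name exampro-app \
  --slot staging \
  --target-slot production
```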
good so now imagine that i want to just split the traffic that's something else that we can do um so notice over here we
have these percentages here i'm not sure why it won't let me change those
so maybe i'll have to look into that so i'll be back so i'm not sure why it's not showing us
that traffic slot there but what i'm going to do is just maybe try to trigger a deploy back into our staging and maybe
that's what it wants to see so what i'm going to do is go back to my code here we'll be in our staging branch
here i'm going to go ahead and edit this file here and
we'll just change this to bajorans, and we will hit update, and we will let that go ahead and deploy.
so if we go to actions here we can see that it is deploying um and we'll just give it some time okay
so we'll see you back here in a bit. i mean, the other reason could be that we're just not at the main level, hold on here. if we go back here to deployment slots, you know what, i think it's just because i was clicked into the slot and then into deployment slots that they're both grayed out. yeah, it is, so we can actually do it at that top level there. it doesn't hurt to do another deploy though, so we'll just wait for that deploy to finish, and then we'll come here and adjust that. okay, all right, so let's take a look at doing
some traffic switching here. so right now if we go to our production we have klingons, and if we go to our staging we have bajorans. so imagine that we only want 50% of the traffic to go to one of them. what we can do is put in 50 percent, and, do i hit enter here, or, oh sorry, save up here, there we go. and so this should take effect, i think, right away. yep, and so now we have a 50/50 chance of getting one or the other, so i'm just going to keep on hitting enter here; if that doesn't work we can try an incognito tab. and there we go, we got the opposite there, and so this is serving up staging and this is serving up production, but they're both on the production url. so that's the way you can split the traffic. so that's pretty much all i wanted to show you for deployment slots; let's now talk about scaling.
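for reference, the same 50/50 split can be set from the cli (a sketch; app and group names are placeholders):

```shell
# route the given percentage of production-URL traffic to the staging slot
az webapp traffic-routing set \
  --resource-group voyager \
  --name exampro-app \
  --distribution staging=50
```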
all right, so let's take a look into how we can do some scaling with our app service. this is only available on standard and beyond, so standard, premium, etc. if we just search for scale we have two options here: scale up and scale out. scale up is pretty straightforward: that just means making our instance larger, and we already did that when we upgraded from our b1 over to our s1
here right so if i was to go here i'm not going to do that but if i was to do that and upgrade um
that would be scaling up right and notice that we're talking about scaling so right now we're limited to 10
instances which is totally fine but now let's take a look at scaling out so if we go to scale out here
and go to custom auto scale what we can do is we can scale based on a metric so
we can add or remove servers based on the demand of the current web applications traffic so we're only
paying for servers when we need them and so we have a minimum one a maximum two that seems fine to me but we're going to
add a rule here, and we're going to base it on cpu percentage; i just want a very easy way to trigger this so we can see a scaling event in action. it has 16 in there; i might just lower that down even further. the metric threshold to trigger a scale action i'm going to put at 10. okay, so here's that line, and i like how you can drag it, but you get the idea that we have a high chance of having this trigger; if i put it here, you can see it's very easy for us to spike our cpu and cause a scaling event. now i'm going to set the duration to one minute so we have a much higher chance of triggering it. it warns that a duration of less than five minutes may generate transient spikes; yeah, that's fair, but i just want to show you a trigger happen.
and we need a cooldown time, which is set to five minutes; that's totally fine. we're going to add one instance when it triggers, and that looks fine to me; i'm going to set this to maximum, okay, and so now we're very likely to trigger that. we'll go ahead and hit add, and now that we have that we're going to go ahead and save it. and now that that's there, what we want to do is actually trigger a scaling event, and where we're going to see that is under the monitoring tab.
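the same autoscale setup could be scripted roughly like this (a sketch; autoscale targets the app service plan, and all resource names here are placeholders):

```shell
# create an autoscale setting on the app service plan: 1-2 instances
az monitor autoscale create \
  --resource-group voyager \
  --resource exampro-plan \
  --resource-type Microsoft.Web/serverfarms \
  --name exampro-autoscale \
  --min-count 1 --max-count 2 --count 1

# scale out by 1 when CPU averages over 10% for 1 minute
az monitor autoscale rule create \
  --resource-group voyager \
  --autoscale-name exampro-autoscale \
  --condition "CpuPercentage > 10 avg 1m" \
  --scale out 1
```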
so if we go to monitoring, the place we need to go take a look, sorry, i forgot for a moment, is actually the run history. so if we go here and check one hour, we can see how many instances are running, and i think if i dial it back here it
should show me over time as it changes we do have a scale up event that has happened which happened i guess four
minutes ago um so it gives you kind of an idea of how many instances are running which
right now is two. so maybe our scaling rule is not the best, because it's triggering too frequently. so what i'm going to do is go ahead and modify that scaling rule; i'm just going to go back and click here, and maybe we'll just make it less aggressive. so what i'm going to do is change it so it's over a duration of five minutes, and i'm going to put the threshold right above here so that it goes back to one. okay,
and we'll go ahead and save that and so now we go back to our run history here
it still shows that it has two, as you can see here, but i want to see this drop back down to one. so it's going to check every five minutes, or within the span of five minutes, so what i'm going to do is just wait here, and i'll see you back in a bit once we see a scaling action happen
here, okay. yeah, i'm just sitting here waiting for it to scale down and i don't see it going, so it makes me think i need to go ahead and set a scale-down action. let's take a look at the one we currently have. so this one is set... oh, you can see it's still spiked, and we don't even have anything going on here. but what i'm going to do is just be really aggressive here; i'm going to say when it's at 50, okay, and so we'll go back here and i'll save that, and i just want to see if it scales down. i shouldn't have to set a scale-down action, it should just go.
um, and what i'm actually going to do is be a little more aggressive, i know i'm changing a lot of stuff here, but i'm going to set it to a duration of 1 minute so we can see this a lot sooner. and we will go back to our run history here, and we'll see if we observe a scale
down. all right, so it's not scaling down here, but i think it's probably because i need a scale-in action. so what we could do is go ahead and add a new rule; if we go here and just look at it, it's not going to decrease the count unless we have a scale-in action. i don't think it's necessary for us to set one here, i think you get the idea, but that's it for scaling. so we're all done here with azure app services; all we've got to do is go ahead and delete it. so let's go ahead and delete
our app here. okay, so there's a few different ways we can do it; i'm going to do it via resource groups. i believe we called it voyager here, so click into that, and i'm going to go ahead and delete the resource group, and here are all the related services. and so i will type in voyager to confirm, and there we go, great.
hey, this is andrew brown from exampro, and we're looking at the azure app services cheat sheet. this is a two-pager, so let's jump into it. azure app service is an http-based service for hosting web
applications rest apis and mobile backends you can choose a programming language in either a windows or linux
environment it's platform as a service so it's the heroku or aws elastic beanstalk equivalent of azure if that
helps you remember what it is azure app services makes it easy to implement common uh common features
so for integrations and features such as azure devops github integrations docker hub integrations package
management, easy-to-set-up staging environments, custom domains, and attaching tls and ssl certificates. you pay based on the azure app service plan, so you've got a standard tier, a dedicated tier, and an isolated tier; notice that this tier does not support linux. okay, azure app service supports the
following runtimes: .net, .net core, java, ruby, node.js, php, and python. azure app service can also run docker, as a single docker container or multi-container. you can also bring your own custom container: you create it, upload it to azure container registry, and then deploy it. you have deployment slots; this allows
you to create different environments for your web application you can swap environments this could be how you
perform a blue-green deployment. onto the second slide here: app service environment (ase) is an azure app service feature that provides a fully isolated and dedicated environment for securely running app service apps at high scale. customers can create multiple ases within a single azure region or across multiple azure regions, making ase ideal for horizontally scaling stateless application tiers or high requests-per-second (rps) workloads. ase comes with its own pricing tier, which is the isolated tier. ase can be used to configure security architecture: apps running on ase can have their access, oop, this should say granted, not graded, by upstream devices such as a web application firewall (waf), and ases can be deployed into an availability zone (az) using zone pinning. there are two deployment types for ase: you have external ase and ilb ase, where ilb means internal load balancer. okay,
azure app service provides many ways to deploy your applications, and there are so many i'm not going to list them here, but i'd recommend that you go review the actual content for that, because they would have made a whole cheat sheet on their own. then the last thing here: webjobs is a feature of azure app service that enables you to run a program or script in the same instance as the web app, api app, or mobile app. there's no additional cost for webjobs. so there you go. let's take a look at what a key-value
store is so a key value store is a data store that is really dumb but it's super super fast okay and so they'll lack
features that you would normally see in relational databases like relationships indexes aggregation transactions all
sorts of things, but that is the trade-off for that speed, okay. and so here is kind of a representation of a key-value store: you have a key, which is a unique identifier for the value, and i'm representing the value as a bunch of ones and zeros because i want you to understand that there aren't really columns, it's just key and value. so the idea is, imagine that those ones and zeros actually represent a dictionary, and that's usually what they are underneath: an associative array, hash, or dictionary,
okay. and so even though it looks like, you know, if this was a relational database you could see these as kind of like columns, and if we did that, that's how a key-value store can kind of mimic tabular data. but the thing is, there is no consistency between the rows, hence it is schemaless; that's just a way to get tabular data out of key-values. due to their simple design they can scale well beyond relational databases: relational databases become very hard to shard and do a bunch of other stuff with, but key-value stores are super easy to scale. they do come with a lot of extra engineering around them because of those missing features, though.
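the idea above can be sketched with a plain dictionary (real key-value stores like redis add networking, persistence, and replication on top, but the data model is this):

```python
# a key-value store is essentially a giant dictionary: unique key -> opaque value
store = {}

# the store enforces no schema, so "rows" can differ freely
store["user:1"] = {"name": "Kirk", "rank": "Captain"}
store["user:2"] = {"name": "Spock"}          # no "rank" field -- schemaless
store["page:home"] = "<h1>hello</h1>"        # not even a dict

# lookups by key are fast; there are no joins, indexes, or queries over values
assert store["user:1"]["name"] == "Kirk"
assert "rank" not in store["user:2"]
```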
all right, let's talk about document stores. a document store is a nosql database that stores documents as its primary data structure. a document could be xml, but it's most commonly json or a json-like structure, and document stores are subclasses of key-value stores. the components of a document store compared to a relational database look like this: tables are now collections, rows are documents, columns are fields, indexes are still called indexes, and joins are done via embedding and linking. so if a key-value store can kind of store this, why would you use a document store? well, there are just a lot more features around the documents themselves. you know how we saw the key-value store had basically no functionality? well, document stores bring back a lot more of the functionality you're used to in a relational database, and so it makes things a little bit easier to work with, okay. all right, let's take a quick look here
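embedding versus linking, the document-store answer to joins, can be sketched like this (hypothetical data; real stores would use generated document ids):

```python
# "embedding" nests related data directly inside the document
order_embedded = {
    "_id": 1,
    "customer": {"name": "Janeway", "city": "Bloomington"},  # embedded document
    "items": [{"sku": "tea-earl-grey", "qty": 2}],
}

# "linking" stores a reference to another document's id instead,
# resolved with a second lookup (like a join done by the application)
customers = {7: {"name": "Janeway"}}
order_linked = {"_id": 1, "customer_id": 7}

assert order_embedded["customer"]["name"] == "Janeway"
assert customers[order_linked["customer_id"]]["name"] == "Janeway"
```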
at mongodb, which is an open-source document database that stores json-like documents. the primary data structure for mongodb is called bson, binary json. its data structure is very similar to json, but it's designed to be more efficient in both storage space and scan speed compared to json. bson has more data types than json: it has datetimes, byte arrays, regular expressions, md5 binary data, and javascript code, whereas json is just strings, numbers, arrays, and so on, very very simple. but because it has all these other data types and it's stored in a binary format, it's not plain text, it's actually binary data,
that's the what reason why it the storage space and the scan speed is so fast now if you did use javascript to
perform an operation like say insert data this is what it would look like so you have kind of an idea that you're
inserting items into a collection there okay just to list out some features of
mongodb it supports searches against fields range queries regular expressions it
supports primary and secondary indexes it's highly available it's it's high availability can be obtained via rep
replica sets so replica to offload reads or access standby in case of failover momentum b scales horizontally using
sharding mongodb can run multiple servers via load bouncing mongodb can be used as a file system which is called
grid fs with uh with load balancing and data replication features over multiple machines uh for storing files mongodb
provides three ways to perform aggregation uh grouping dat and aggregations just grouping data to
return a query so aggregation pipeline map reduce single purpose aggregation mongodb supports fixed collections
called capped collections going to become claims to support multi-document asset transactions so mongodb when it
first came out didn't do all this stuff and people complained about it i like it being very hard to scale but now it's a
lot easier to use so you know mongodb is something that is uh more uh a more popular option nowadays than it
was a few years ago so there you go [Music] all right let's take a look here at what
a graph database is. A graph database is composed of data structures that use vertices (nodes, or dots) which form relationships to other vertices via edges (arcs, or lines). Some use cases: fraud detection, real-time recommendation engines, master data management, network and IT operations, identity and access management (they say it's really, really good for that, and IAM is something I want to look into later), traceability in manufacturing, contact tracing, data lineage for GDPR, customer 360-degree analysis for marketing, product recommendations, social media graphing, and feature engineering for ML. Let's break down the little components here. You'd have a node, and a node can contain data properties, and it forms a relationship to another node through an edge; that relationship can have a direction and data properties of its own. So it's a lot more expressive than a relational database in how things can point to each other, and super useful for particular use cases. Now let's take a look at Apache TinkerPop, which is a graph computing framework for both graph databases (OLTP) and graph analytics systems (OLAP). TinkerPop enables developers to use a vendor-agnostic, distributed framework to traverse and query many different graph systems. They always say "traverse" because you're walking a graph, right? There are a lot of databases this thing connects to, but the ones I want to point out as important are Amazon Neptune, Cosmos DB, Hadoop via Spark, Neo4j (one of the most popular graph databases), OrientDB, and Titan. So the thing is, TinkerPop isn't a graph database itself; it's basically an adapter to other graph databases. TinkerPop includes a graph traversal language called Gremlin, which is the single language that can be used across all these graph systems. So let's talk about Gremlin. Gremlin is the graph traversal language for Apache TinkerPop. Sometimes, even without TinkerPop (and I think this is the case with Cosmos DB), a database will support the language natively, so you don't necessarily need TinkerPop to work with some databases, but it's great to have the framework if you need it. Gremlin is designed to be write once, run anywhere (WORA): a Gremlin traversal can be evaluated as a real-time query (OLTP) or as a batch analytics query (OLAP). The diagram shows the OLTP graph databases on one side and the OLAP systems on the other. And Gremlin being a hosted-language embedding means you can use your favorite programming language when you write Gremlin. So there you go.
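To make a traversal concrete, here is a toy, in-memory stand-in for a graph (not TinkerPop itself), showing what a Gremlin-style query is doing conceptually. The vertex and edge names are made up for illustration; the Gremlin string in the comment is just one plausible way to express the same walk:

```python
# Conceptually, the Gremlin traversal
#   g.V().has('name', 'Andrew').out('worksFor').values('name')
# means: find vertices named Andrew, follow outgoing 'worksFor' edges,
# and collect the names of the vertices you land on.
vertices = {
    1: {"label": "person", "name": "Andrew"},
    2: {"label": "company", "name": "Microsoft"},
}
edges = [(1, "worksFor", 2)]  # (out-vertex, edge label, in-vertex)

def out(vertex_id, edge_label):
    """Follow outgoing edges with a given label, like Gremlin's out() step."""
    return [v_in for v_out, lbl, v_in in edges
            if v_out == vertex_id and lbl == edge_label]

start = [vid for vid, v in vertices.items() if v.get("name") == "Andrew"]
results = [vertices[t]["name"] for s in start for t in out(s, "worksFor")]
print(results)  # ['Microsoft']
```

The chained-step shape is the thing to recognize on the exam: a Gremlin query is a pipeline of traversal steps, not a SQL statement.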
Hey, this is Andrew Brown from ExamPro, and we are looking at Azure Tables, which is a type of storage for a NoSQL key-value data store within Azure storage accounts. Azure Tables stores non-relational structured data with a schema-less design, and there are two ways to interact with it: through the Table storage API, or through Microsoft Azure Storage Explorer, which I find is the easiest way. Looking at Storage Explorer, if you wanted to add an entry you'd have to provide a partition key, which is a unique identifier for the partition within a given table, and a row key, a unique identifier for an entity within a given partition. You have all your data types there: string, boolean, binary, double, GUID, Int32 and Int64. If you wanted to query, you'd query along the partition key and row key, and you could also do some additional filtering. Just notice that you have your partition key and you put in your values, like Klingon and Worf, and everything else is just additional properties you added. A lot of the time, the way these key-value stores work is that the query returns all the results server side, and then those extra properties are filtered client side. I don't know if that's the case with Azure Tables, but that's generally how these things work. So there you go. Hey, it's Andrew Brown from ExamPro, and we're looking at Cosmos DB, which is a
service for fully managed NoSQL databases that are designed to scale and be highly performant. Cosmos DB supports different kinds of NoSQL database engines, which you interact with via an API: Core (SQL), which is the document data store; the Azure Cosmos DB API for MongoDB; Azure Table; and Gremlin (which will probably be using TinkerPop). All of these NoSQL engines let you specify capacity: provisioned throughput, where you pay for a guarantee of capacity, or serverless, where you pay for what you use. If you're just playing around with the service, go ahead and choose the serverless option. A lot of the time when people talk about Cosmos DB they're usually talking about Core (SQL), so if you say Cosmos DB it usually means the document store, but understand that there's a bunch of stuff underneath it. Now, if you want to start viewing data, making stuff, and playing around with it, you'd use the Cosmos DB Explorer, a web interface you can find at cosmos.azure.com. After you've made your Cosmos DB cluster (or container, or whatever they call it), you can go access your database. Here we have the SQL API, which would be the document store, and you can see we've created a new item for that data. I just want to show you that if you use the dropdown here you can choose container or database, so we can create a new container. Also, if you are in Azure, it looks like they have the same thing under the Data Explorer tab; it's the Cosmos DB Explorer, just inline, so you don't have to go to that URL. You can just click into your Cosmos DB account and go to Data Explorer. I also wanted to show you that if you made a graph database, you can still do everything through this Explorer; for the different types the interface changes a bit, so here we'd add a new vertex, and it's just slightly different. Okay.
All right, so the thing about Azure Tables is that you can use it within either Cosmos DB or within account storage, and it's a really good comparison to look at these two, because this way we can really understand how powerful Cosmos DB is. So let's compare them. For latency, Azure Tables in account storage is fast but has no upper bounds on latency; Azure Cosmos DB gives you single-digit-millisecond latency for reads and writes. For throughput, account storage has variable throughput limited to 20,000 operations per second; with Cosmos DB you get a guarantee backed by an SLA and no upper limits. For global distribution, account storage is a single region; Cosmos DB supports 30-plus regions. For indexing, account storage only has the primary index on partition and row key, with no secondary indexes; Cosmos DB gives you automatic and complete indexing on all properties, with no index management. For querying, account storage query execution uses the index for the primary key and scans otherwise; with Cosmos DB, queries can take advantage of automatic indexing on properties for fast query times. For consistency, account storage is strong within the primary region and eventual in secondary regions; with Cosmos DB the consistency levels are a lot more flexible, there are like five of them, you know what I mean. For pricing, account storage is consumption-based; Cosmos DB is consumption-based or provisioned capacity. For SLAs, account storage gives 99.99% availability; Cosmos DB is backed by an SLA too, though under some conditions it does not apply. Hopefully that shows you that Cosmos DB is very performant and globally available, with single-digit-millisecond latency. I really feel this is meant to compete with AWS's DynamoDB, because it sounds so similar to DynamoDB. But yeah, there you go. Hey, this is Andrew Brown from ExamPro,
and we are on to the Azure Tables and Cosmos DB cheat sheet for the DP-900. I want to point out something I'm sure you already know about: in the course I spelled Cosmos DB without the "s", like, everywhere, and I'm not going to go back and fix that, but I know I'm going to hear no end of it for the next year. Okay, so let's start at the top. Azure Tables: a key-value data store. It can be hosted on Azure storage account storage, where it is designed for a single region and a single table, or on Cosmos DB, where it's designed for scale across multiple regions. Cosmos DB: a fully managed NoSQL service that supports multiple NoSQL engines, called APIs (why they didn't call them engines, I don't know). Core SQL API: the default one; it's a document database, you can use SQL to query documents, and when people talk about Cosmos DB this is usually what they mean, the document database. Gremlin API: a graph database you can traverse with Gremlin across nodes and edges. MongoDB API: a MongoDB database, which is a document database. Table API: just Azure Tables key-value, but within Cosmos DB. Apache TinkerPop: an open-source framework providing an agnostic way to talk to many graph databases (they probably won't ask you about TinkerPop on the exam). Gremlin: a graph traversal language to traverse nodes and edges; you definitely need to know what Gremlin is and get used to identifying what it looks like. MongoDB: an open-source document database; it has its own document data structure called BSON, binary JSON, a storage- and compute-optimized version of JSON that introduces new data types. Cosmos DB Explorer: a web UI to view Cosmos databases. And there you go. Hey, this is Andrew Brown from ExamPro,
and we are taking a look at Cosmos DB. Cosmos DB is a service for fully managed NoSQL databases that are designed to scale and be highly performant. Cosmos DB supports different kinds of NoSQL database engines, which you interact with via an API. We'll cover them more than once, but we have Core (SQL), which is a document data store; the Azure Cosmos DB API for MongoDB, another kind of document data store; Azure Table, the old pre-Cosmos DB service that still sticks around, which is a key-value data store; and Gremlin, a graph data store based on Apache TinkerPop, which is why we covered TinkerPop earlier in this course. All of these NoSQL engines let you specify capacity: provisioned throughput, where you pay for a guaranteed amount of capacity, or the serverless option, where you pay for what you use. Let's talk about some of the main advantages of using Cosmos DB, because Azure really wants you to use it; they consider it one of their flagship database products. It integrates with many Azure services, like Azure Functions, AKS, and Azure App Service. It integrates with many different database APIs, as we saw: Core (SQL), MongoDB, Cassandra, Gremlin. There are SDKs for a variety of languages: .NET, Java, Python, Node.js (not Ruby, so I'm not sure I'll be using one anytime soon, but there are a lot of different SDKs). It's a schema-less service that applies indexes to your data automatically, resulting in fast queries. You get an availability guarantee of up to 99.999%. There's data replication between Azure regions, all automatic. Data is protected with encryption at rest and role-based access controls (RBAC). It autoscales, providing a way to handle a variety of different workload sizes. So you can see there are a lot of advantages to using Cosmos DB.
So something that's very interesting about Cosmos DB is that it's kind of an umbrella service for a bunch of different types of databases, and this can be confusing for someone coming from AWS or GCP, where, say, AWS has DynamoDB and all it is is a document database, whereas Cosmos DB supports a variety of different kinds. When you first create your Cosmos DB database you have to choose an API, so let's walk through the different APIs and give you some information about them. The first one, and the default, is the Core SQL API. It is a document database, but it's interesting because it allows you to use SQL, or a SQL-like language. This is quite popular, because one of the challenges of using a document database is that normally you don't get to use SQL, but with Cosmos DB you do, and that is something that's really nice. Then you have the Azure Table API. This can be a bit confusing: before Cosmos DB came out there was just Azure Table, and Azure Table is basically just a key-value store that you access through Azure Table storage in storage accounts. Then they decided to evolve Azure Table into more of a document database and make it highly resilient, highly available, and highly redundant, so basically the Core SQL API is like version two of the Azure Table API. But the original still exists because it's very cost-effective: if you don't need a lot of redundancy and you just need a key-value store, something very inexpensive, you can use the Azure Table API. Even though you don't really access it through Cosmos DB, it is still part of the Cosmos DB product, because it was the first iteration of it. You have the MongoDB API; I don't think it's actually MongoDB underneath, but it's compatible with MongoDB, so if you need MongoDB, this is where you spin that up. Same thing with Cassandra: if you need Cassandra, you'd spin that up as well, and it uses the Cassandra Query Language. Then you have the Gremlin API, which allows you to use a graph database. You can see these are really, really different things, and it is a bit confusing, but that's just how Azure likes to organize their services. So hopefully that clears that up for you. Okay.
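The API-to-data-model mapping above is the part worth memorizing, so here it is condensed into data (a study aid, not anything from the Azure SDK; the wording of each description is mine):

```python
# Cosmos DB APIs and the data model each one exposes.
apis = {
    "Core (SQL)":  "document store, queried with a SQL-like language (the default)",
    "MongoDB":     "document store, compatible with MongoDB clients",
    "Cassandra":   "wide-column store, uses the Cassandra Query Language",
    "Azure Table": "key-value store, the cost-effective first iteration",
    "Gremlin":     "graph store, traversed with the Gremlin language",
}
for api, model in apis.items():
    print(f"{api}: {model}")
```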
So let's talk about the core components of Cosmos DB, because it is a bit confusing to try to understand the relationship between all of them; for a fully managed database it sure has a lot of moving parts. The first thing, when you go over to the Cosmos DB portal, is that you're going to spin up a database. When you create a database, it's going to be a logical namespace for your collections, your users, and your permissions, and it's going to determine the API you choose. Once you've chosen your database, you need containers, and normally when you create a database it'll create a container for you. A container is a unit of compute, but there is a lot of other information you can attach to a container when you create it. Then you also have collections. I find collections a bit confusing, but they basically map to a container, and the collection is the billable entity, so this is how you determine the cost and where that information lives. When you create a container it'll have a bunch of information below it, and that will be attached to the collection entity. Hopefully that clears things up. I do find containers a bit confusing, because again, you create them at the same time as your database, but we will talk about that, because we do come across it in the course. Okay.
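One way to keep the nesting straight is to model it as plain data. This is a rough sketch of how the pieces relate, assuming the account → database → container → item hierarchy described above; the names are illustrative, not real resources:

```python
# A sketch of how the Cosmos DB pieces nest, modeled as nested dicts.
account = {
    "name": "my-cosmos-account",            # the Cosmos DB account
    "databases": {
        "appdb": {                          # logical namespace: users, permissions, API
            "containers": {
                "orders": {                 # unit of scalability; maps to the billable collection
                    "partition_key_path": "/customerId",
                    "items": [],            # the actual documents live here
                }
            }
        }
    },
}

orders = account["databases"]["appdb"]["containers"]["orders"]
orders["items"].append({"id": "1", "customerId": "andrew", "total": 9.99})
print(len(orders["items"]))  # 1
```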
Let's talk about how we're going to access our Cosmos DB database with Cosmos DB Explorer. It's a web interface to explore and interact with your Cosmos DB account's databases, and it lives at cosmos.azure.com; they have a dedicated subdomain for it. You type that in, you authenticate with your account (Azure is always single sign-on, so it's pretty easy), and then you can access your database and interact with it without having to write any SDK code or use the CLI; you have a GUI for it. The example here is for Core SQL, but you can interact with all the different types of databases in here, so it's not just Core SQL; there's a lot you can do with it. You can create containers from here, create new databases, a lot of the same things you can do from the Azure portal; it doesn't matter where you create things, it's just a question of where you want to work. There is an easier way to access this, though. Instead of going to cosmos.azure.com, the way I like to access it is to be in the Azure portal for my database and open it under the Data Explorer tab. Notice it says "Azure Cosmos DB account". It gets confusing because they say an account can contain multiple databases, but really it's just one database; it's so confusing that you'll hear me interchange the terms account and database, because the documentation does not clarify it, but in the GUI you can see it says the word account. Anyway, this is the way I'd recommend accessing that information. You can also work with the different database types here; for example, this one is for Gremlin, and here I'm creating a new vertex, which is a graph thing, so it's not just for Core SQL. Okay.
Let us talk about partitioning schemas in Cosmos DB, because this is a very important concept for NoSQL databases; it's all about scale, and when you're dealing with databases at scale, you're dealing with partitions. This is where partition keys come in: the data stored among partitions in Azure Cosmos DB is grouped by partition key in order to improve performance. The main concepts of partitioning schemas in Azure Cosmos DB are: partition keys, which are used to group items and work something like primary keys, if you know those from traditional relational databases; logical partitions, where a logical partition is the group of items that all share the same partition key value; and physical partitions, where a physical partition holds a set of logical partitions. Azure Cosmos DB manages the mapping of logical partitions onto physical partitions, which is a one-to-many relationship. Then there are replica sets: groups of physical partitions that are materialized as a self-managed, dynamically load-balanced group of replicas spanning multiple fault domains. I imagine when they say "physical" they mean the actual underlying machine, the logical partitions are the virtual partitions within those machines, and replica sets just mean duplicates of that kind of stuff. Hopefully the diagram helps make sense of it; there's a lot going on, but we can break it down. Each physical partition consists of a set of replicas, also known as a replica set. In the diagram we see a partition set, and within it the physical partition; confusing, but I believe that box is the physical partition. Then we have logical partitions, the little boxes inside, which are mapped to physical partitions that are distributed globally, so it's what I said: the physical machine, and the virtual partitions within it, distributed globally. Partition sets in the image refer to a group of physical partitions that manage the same logical partition keys across multiple regions. Hopefully that makes things a bit clearer. In practice you probably won't need to remember all of this, and for the exam you definitely don't need to know it, but it's good to go through it so you have an idea of these terms. So there you go.
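The logical-to-physical mapping can be sketched in a few lines. This is a simplified toy model, not how Cosmos DB actually hashes keys internally; the MD5-modulo scheme and the partition count are illustrative assumptions:

```python
import hashlib

# Toy model of Cosmos DB partitioning: items sharing a partition key value
# form a logical partition, and logical partitions are hashed onto a fixed
# set of physical partitions. The real service manages this for you.
PHYSICAL_PARTITIONS = 4

def physical_partition(partition_key_value: str) -> int:
    """Deterministically map a partition key value to a physical partition."""
    digest = hashlib.md5(partition_key_value.encode()).hexdigest()
    return int(digest, 16) % PHYSICAL_PARTITIONS

# Every item with the same key value lands on the same physical partition,
# which is what makes a point read (partition key + id) a single-partition hit.
p1 = physical_partition("andrew")
p2 = physical_partition("andrew")
assert p1 == p2
print(p1)
```

The key property is determinism: because the hash of "andrew" never changes, the service always knows exactly which physical partition to go to.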
All right, let's talk about choosing a partition key. This is super, super important for Cosmos DB. A partition key is made up of two components: the partition key path and the partition key value. Consider a document, or a key-value, where we need to figure out what our partition key path should be; say we have a user ID of "andrew" who works for Microsoft. We're trying to find something that is a unique identifier for that item, and I'd say the user ID is that thing, so in this case our partition key path would be /userID. That's part one. The partition key path accepts alphanumeric and underscore characters, and you can reference nested objects using the standard path notation with forward slashes; examples would be /id or /userID. The second part is the value. The partition key value can be either a string or a numeric type, and in our example it is a string. Your partition key for any container should be a property whose value does not change; you can't change the value of a property if it's your partition key. It should also be a property with a wide range of possible values. That's why we chose user ID: it's going to be pretty unique per person, whereas with "works for" you're going to have big groups sharing a value, since a lot of people might work for Microsoft, so it's not as unique. Spreading request units is also important: RU consumption and data storage should be spread evenly across all the logical partitions, which ensures even RU consumption and storage distribution across your physical partitions. Hopefully that gives you an idea of how to choose a partition key, but now we'll talk about unique keys. Unique keys provide developers with the
ability to add a layer of data integrity to their database. By creating a unique key policy when a container is created, you ensure the uniqueness of one or more values per partition key. The unique key is scoped to a logical partition, so if you partition the container by zip code, you could still end up with duplicated items across different logical partitions. In the example here our unique key is based on the first name plus the address zip, two different values, because if you keyed on first name alone there might be a bunch of Andrews, right? If you want to guarantee there's only a single Andrew, you might scope it with a postal or zip code. You can't update an existing container to use a different unique key, so you really do have to plan ahead and choose the right thing. A unique key policy can have a maximum of 16 path values (path values being, I believe, the comma-separated paths), and each unique key policy can have a maximum of 10 unique key constraints or combinations. When a container has a unique key policy, the RUs charged for creating, updating, and deleting an item are slightly higher; we haven't talked about RUs yet, but we will get to them in the course, so just hold on. Unique keys aren't case sensitive, so consider that as well. So there you go.
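The partition-scoping of a unique key policy can be sketched in a toy model. This is not the Cosmos DB implementation, just an illustration of the rule that uniqueness is enforced only within a logical partition; the property names are the ones from the example above:

```python
# Toy model: uniqueness of firstName is enforced only *within* a logical
# partition (here, items sharing a zipCode), mirroring how Cosmos DB scopes
# unique key policies.
seen = set()  # (partition key value, unique key tuple)

def insert(item, partition_key="zipCode", unique_paths=("firstName",)):
    key = (item[partition_key], tuple(item[p] for p in unique_paths))
    if key in seen:
        raise ValueError(f"unique key violation in partition {key[0]}")
    seen.add(key)

insert({"firstName": "Andrew", "zipCode": "90210"})
insert({"firstName": "Andrew", "zipCode": "10001"})  # ok: different partition
try:
    insert({"firstName": "Andrew", "zipCode": "90210"})  # duplicate in partition
except ValueError as e:
    print(e)
```

Note how the second Andrew is accepted because he lands in a different logical partition; only the third insert, a duplicate within the same partition, is rejected.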
All right, now it's time to talk about Cosmos DB containers, because without containers we don't really have any compute to run these databases on, and we do talk a bit about containers here. Azure Cosmos containers are the unit of scalability in Azure Cosmos DB, both in terms of storage and throughput. They are beneficial when you need a different set of configurations for each part of your Azure Cosmos database, because they allow you to customize each container individually. Azure Cosmos containers have some container-specific properties, and those properties, which can be system-generated or user-configurable, vary according to the API used. When you first create a database you actually have to create a container with it, so notice here we'll create a new one, and then we get a bunch of options, which is what we're going to talk about: all these options around containers. On the properties of Cosmos DB containers: there are system-defined properties, and depending on which API you use, some of them might not be directly exposed. Examples of those system-defined properties are the underscored ones, like _rid, _etag, _ts, and _self, plus id. Notice that those are all system-generated while id is user-configurable, and exposure varies by API; it looks like it's mostly the SQL API that exposes them, with id on all of them. Just to read through these quickly, we have: the unique identifier for the container; the entity tag used for optimistic concurrency control; the last-updated timestamp of the container; the addressable URI for the container; and the user-defined unique name for the container. There are more than these, but these are the ones I picked out to show you. Next, let's talk about capacity for containers. But what is capacity? Capacity defines the amount of underlying resources that are available to support
consumption of resources such as compute and storage. I made that sound much fancier than it is, but it's just what is available to you: how much compute can I use, how much storage can I use. Cosmos DB has two capacity modes, similar to DynamoDB (though there the equivalent is called on-demand rather than serverless): provisioned throughput and serverless, and you're going to choose between those two options. Let's talk about what's different. With provisioned throughput, for each of your containers you provision some amount of throughput expressed in RUs (request units) per second; for workloads where you can reliably predict the traffic, or you want more flexibility in your options, that's where it makes sense. With serverless, you can run database operations against your containers without having to provision any capacity; this is great for low or small workloads or for unpredictable spikes in traffic. It's easy to configure but has some limitations. So: provisioned throughput makes sense at scale; serverless makes sense when you're getting started, or when you find it's just easier to manage. You're really going to have to decide between those two options, which is what we'll look at here. Just to break down some additional differences: for geo-distribution, provisioned throughput can run in an unlimited number of regions, whereas serverless can only run in a single region. Max storage per container is unlimited for provisioned throughput, but 50 gigabytes for serverless. For performance, provisioned throughput gives sub-10-millisecond latency for point reads and writes, covered by SLAs (service level agreements); for serverless, point reads are under 10 ms and writes under 30 ms, covered by SLOs, so a little bit different there. The billing model is very different too: for provisioned throughput, billing is done on a per-hour basis for the RUs provisioned, regardless of how many RUs were consumed, while serverless is billed on a per-hour basis for the amount of RUs consumed by your database operations. In theory serverless can be more cost-effective, but it really depends on what your consumption looks like. We're not done with capacity; we're going to look a little bit more at throughput. So there's more to the story for
provisioned throughput, because there are different throughput strategies you can choose when you create your database container (and as we said earlier, when you create a database you always have to create a container with it). The two modes are dedicated and shared. In dedicated mode, throughput is exclusive to one container and backed by SLAs; in shared mode, the throughput is shared across all your containers. You cannot change throughput strategies after creation, so you have to choose wisely; to switch strategies you basically have to create a new container and migrate the data over. When you're creating your database and container, there's a little checkbox for sharing throughput across containers; that's how you know it's set for shared mode. Shared throughput is not available in the serverless capacity mode, because it's all about provisioned throughput, which makes sense. So when should you share and when should you not? Sharing database-level provisioned throughput among its containers is analogous (I don't know why they chose that word, it's such a hard word, but it means comparable) to hosting a database on a cluster of machines. Because all containers within the database share the resources of the available machines, you naturally do not get predictable performance for any specific container, but you get better utilization of your resources; they don't go to waste. So I think it's mostly a cost-strategy kind of thing: if you have a bunch of containers that aren't all busy, you want the throughput evenly available to whichever ones need it. But yeah, that's all there is to that feature. Okay.
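A back-of-the-envelope model of why shared mode can be more efficient; the RU numbers here are made up purely for illustration:

```python
# Dedicated mode: each container gets its own RU/s budget.
# Shared mode: all containers in the database draw from one pool, so a busy
# container can use RUs an idle sibling isn't consuming.
shared_pool_rus = 1000                     # database-level provisioned RU/s
usage = {"orders": 800, "audit": 100}      # RU/s each container is consuming

total = sum(usage.values())
print(total <= shared_pool_rus)  # True: the workload fits, even though
                                 # "orders" alone would blow past a naive
                                 # 500 RU/s per-container dedicated split
```

The flip side, as noted above, is that "orders" spiking to 1,000 RU/s would starve "audit", which is exactly the unpredictable-performance trade-off of shared mode.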
Okay, let's talk about reading data from Cosmos DB. There are two different ways to do this: point reads and queries. This is a little bit confusing, and I honestly don't always know what counts as a point read and what doesn't, but in practice it's not that bad. A point read is a key-value lookup on a single item by item ID and partition key, while queries allow you to return multiple items. Let's break it down. When you're doing point reads you have a latency of about 10 milliseconds; for queries it just varies, could be higher, could be lower, who knows. In terms of the RUs you're charged (and I know we haven't talked about RUs, that's the next video, so hold on or come back to this one), for point reads it's always one request unit, which is very predictable; for queries it's at least 2.3 RUs, and it's really going to vary based on what your query is. The number of items returned for a point read is one item; for queries it's unlimited. And should you include the partition key? For point reads it's required; for queries it's recommended. Hopefully that gives you an idea; those are your two options when reading, and in the follow-along we'll look at both types of read operations. Okay. All right, we keep mentioning request
units. Now it's time to actually describe what they are. The cost of all database operations is normalized by Azure Cosmos DB and expressed in request units. RUs abstract away memory, CPU, and IOPS (input/output operations per second); it's basically right-sizing all of those things for you into a single value based on the data processed. For read operations, the cost is based on the size of the item retrieved: one kilobyte would equal 1 RU, and 100 kilobytes would equal 10 RUs; those are the example calculations for reads. An insert of a one-kilobyte item without indexing costs around 5.5 RUs. This is interesting, because DynamoDB, which is AWS's product, has separate read and write units, whereas Cosmos DB just has request units. Then there's updating: replacing an item costs two times the charge required to insert the same item, so there's some math there. I don't know what deletes cost; I couldn't find it. For queries, the cost is based on the size of the items retrieved, so it's going to vary greatly, because we saw earlier that you can have unlimited items returned; it's pretty darn hard to calculate, and capacity is very, very confusing. The idea is that there is a capacity calculator, and that's what you should use. On the exam they're not going to ask you a bunch of math questions; I point that out because on AWS exams, for DynamoDB, you do have to know the math. Not so much here;
you have this sweet calculator that you can use to figure out the capacity that you'll need.
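Just to make those figures concrete, here's a tiny estimator using the numbers quoted in this lesson (1 RU per 1-KB read, 10 RUs at 100 KB, about 5.5 RUs to insert a 1-KB item, updates at twice the insert cost). These are the lecture's illustrative figures, not an official pricing formula; for real sizing, use the capacity calculator.

```python
# RU estimates using the illustrative figures from this lesson; not an
# official pricing formula (real costs depend on indexing, consistency
# level, and query shape), so use the Azure capacity calculator for planning.

POINT_READ_RU = 1.0                    # a point read is always 1 RU
QUERY_MIN_RU = 2.3                     # queries start around 2.3 RUs and vary
READ_RU_BY_SIZE_KB = {1: 1, 100: 10}   # read cost scales with item size
INSERT_1KB_RU = 5.5                    # 1-KB insert without indexing

def update_ru(insert_ru: float = INSERT_1KB_RU) -> float:
    """Replacing an item costs twice the charge to insert the same item."""
    return 2 * insert_ru
```

So replacing that 1-KB item would run roughly 11 RUs under the lesson's numbers.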
Okay. [Music] A very, very important concept when we're talking about fully managed databases, especially NoSQL databases, is consistency: when am I going to see my data, and when is it going to be consistent across all the servers it's replicating data to? Azure Cosmos DB provides five different consistency levels to balance data availability and performance depending on your requirements, and we have this nice graph to help visualize the five levels. On the left-hand side we have higher latency, lower availability, and the worst read scalability; on the right-hand side we have lower latency, higher availability, and better read scalability. So it's going to be based on what you need. We'll start with the first one, which is the strong consistency level. This is linearizability: reads are guaranteed to return the most recent version of an item, so "strong" means a guarantee that what you read is what you get; but the latency is going to be higher, so it could be a bit slower to get back to you. Then you have bounded staleness: this is a consistent prefix where reads lag behind writes by at most "k" versions of an item or a "t" time interval; just understand that it's a bit more relaxed than strong. We have session: it has a consistent prefix with monotonic reads, monotonic writes, read-your-writes, and write-follows-reads guarantees. I know this might not make sense yet, but don't worry; we'll go through the list and then go into more detail on all of these. You have consistent prefix, where reads return some prefix of all updates, with no gaps. And you have eventual: out-of-order reads, so you might read something, and depending on what replica it hits, the data might not be all up to date. The way you're going to set your consistency is you're going to choose it here, and there'll be additional options. This is honestly really confusing; again, I keep comparing to AWS DynamoDB, where it's not this complicated, or even AWS S3, where it's just eventual or consistent. You can see there are a lot of options here, but with that flexibility you have more choices, I guess. So let's talk about all of these
again. Strong consistency: read operations ensure that the most recent data is returned. Reads cost the same as bounded staleness, but more than session and eventual consistency, and writes can only be read after the data has been replicated by a majority of replicas. When they talk about cost here, I guess they mean the cost to retrieve the information; I'm not 100% sure, and I didn't see anything on the exam about costs with consistency levels, but this is the language used in the documentation, so that's why I'm bringing it over. For bounded staleness, read operations lag behind write operations by a time or version difference. Reads cost the same as strong consistency and more than session and eventual. It has the highest level of consistency compared to session consistency, consistent prefix consistency, and eventual consistency, and it's recommended for globally distributed applications that need high availability and low latency; so remember, it's for globally distributed applications. We have session: read operations ensure that written data is consistent within the same session. Consistency is scoped to a user session, while other users may encounter dirty data if another session has just written to it. It's the default consistency level used for a newly created database. Reading costs are lower than bounded staleness and strong consistency, but higher than eventual consistency. We have consistent prefix: read operations ensure that the most recent data replicated among the replicas is returned, but it does not guarantee the data is the most recent overall. Dirty data occurs when one replica changes the data state but that data has not yet been replicated; they keep using the word "dirty" to mean that when a piece of data is stored redundantly in multiple places, some of the copies are not yet up to date. It has a stronger consistency level than eventual consistency, but weaker than any other. Then we have eventual: read operations do not guarantee any consistency level. It's the lowest consistency level, with the lowest latency and the best performance among the consistency levels, so things are really fast, and it has the least expensive read operation cost compared to any other consistency level. So there you go, there are all the levels.
Hopefully that makes sense; again, it's very confusing, but there you go. [Music]
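The relative ordering is what tends to show up on the exam, so here are the five levels as a simple ranked structure, strongest first, with the notes from above. This structure is just my own study mnemonic, not an SDK object.

```python
# The five consistency levels, strongest guarantee first. The notes summarize
# the documentation language quoted above; this is a study mnemonic only.

CONSISTENCY_LEVELS = [
    ("strong",            "linearizability; reads return the latest version"),
    ("bounded_staleness", "reads lag writes by at most k versions or t time"),
    ("session",           "default; read-your-writes within one session"),
    ("consistent_prefix", "updates seen in order, with no gaps"),
    ("eventual",          "out-of-order reads possible; fastest and cheapest"),
]

def stronger(a: str, b: str) -> bool:
    """True if level `a` offers a stronger guarantee than level `b`."""
    names = [name for name, _ in CONSISTENCY_LEVELS]
    return names.index(a) < names.index(b)
```

Read cost roughly follows the same ordering: strong and bounded staleness cost the same, session less, eventual the least.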
Let's talk about a very useful feature in Cosmos DB known as the change feed. The change feed is a service that monitors changes in all containers and distributes events triggered by those changes to multiple consumers; the change feed in Azure Cosmos DB is a persistent record of changes to a container, in the order that they occur. The utility of this is not unique to Cosmos DB; a lot of databases have similar extensions or modules, and DynamoDB has something similar called DynamoDB Streams. The idea is that you have a data source, and when an insert or update happens, that record goes into the change feed, and then you can react to that record and push it to other services. So it's like a way of triggering something on an insert: you say, okay, I insert this data, and then send this data over to this other service. It's just a way of triggering stuff, okay? Azure has an SDK for .NET, Java, Python, and Node.js; nothing for Ruby (they're having lots of trouble with the Ruby SDK; at some point they'll fix it). The change feed is supported for the SQL API, Cassandra API, MongoDB API, and Gremlin API; basically all the APIs, with the exception of the Table API. So that's that. In terms of its implementation, the change feed processor is composed of four components. We have the monitored container: this is where any insert or update executes, and the operations are reflected in the change feed. Let me get my pen out here so we can see where it is; the monitored container, I guess the collections, are up here. We have the lease container (they're saying "collections" because, remember, a collection maps to a container, so we're kind of interchanging collection and container right here), which stores the state and coordinates the change feed processor, which is down here. The host: an application instance that uses the change feed processor to listen for changes; that's how it knows what to do. And the delegate: the code that runs when an event in the change feed notification triggers it, so that's the code within the consumer there. The change feed processor may be hosted on Azure services that support long-running tasks, such as Azure WebJobs, Azure Virtual Machines, Azure Kubernetes Service, and Azure .NET hosted services. We could have looked at the code (it's a bunch of C# code that they have as examples, and it's super not exciting), but I just want you to know about this feature. Again, it's not unique to Cosmos DB, but it is a common thing you should know about
for these kinds of fully managed databases. Okay. [Music]
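To make those four moving parts concrete, here's a toy, in-memory imitation of the processor pattern: a list standing in for the monitored container, a dict standing in for the lease container's state, a poll loop playing the host, and a delegate callback. This is purely a sketch of the concept; the real implementation lives in the Cosmos DB SDKs.

```python
# Toy sketch of the change feed processor pattern. The real thing is in the
# Cosmos DB SDKs; here the "monitored container" is just a list, and the
# "lease container" is a dict remembering the last processed position.

class ToyChangeFeedProcessor:
    def __init__(self, delegate):
        self.feed = []            # monitored container: ordered record of changes
        self.lease = {"pos": 0}   # lease container: coordination state
        self.delegate = delegate  # delegate: code to run for each change event

    def write(self, item):
        """An insert/update on the monitored container lands in the feed."""
        self.feed.append(item)

    def poll(self):
        """Host loop: deliver unseen changes, in order, to the delegate."""
        while self.lease["pos"] < len(self.feed):
            self.delegate(self.feed[self.lease["pos"]])
            self.lease["pos"] += 1

seen = []
processor = ToyChangeFeedProcessor(seen.append)
processor.write({"id": 1})
processor.write({"id": 1, "status": "updated"})
processor.poll()   # delegate receives both changes, in order
```

The delegate is where you'd push the record on to another service, which is the "triggering stuff" part.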
Hey, this is Andrew Brown from ExamPro, and we are looking at Azure storage accounts, which are used for containing all your Azure Storage data objects, such as blobs, files, queues, tables, and disks. Storage accounts is one of those services with a bunch of different storage types within it, so it's quite the multi-purpose service, and each type of storage has different features and its own pricing model. Let's quickly break down the types of storage: we have General Purpose version 1, General Purpose version 2, Blob Storage, Block Blob Storage, and File Storage. I just want you to know that when you're using storage accounts, I say "storage type," but for whatever reason the UI calls it "account kind," so just be aware of that small discrepancy. Storage accounts are going to vary based on features, and the common dimensions we'll see across storage are: supported services, so what can I put in the storage account (on the right-hand side you can see that if you make a General Purpose version 2 account, you have access to containers, queues, tables, and file shares, and those options change based on what you're using); performance tiers, which is how fast you'll be able to do reads and writes, with Standard and Premium; access tiers, so how often do I need to quickly access these files; replication, so how many redundant copies should be made and where; and the last thing is deployment models, so what deploys the supported service, Resource Manager or Classic, and in most cases it's going to be Resource Manager.
[Music] So here I have all the storage types on the left-hand side, with the feature set and how it varies across the types. Let's quickly look through to see where the standouts are; you might want to review this again at the end of the section so you're a bit more familiar with all these features and it clicks a bit better. You're going to notice that version 1 is the only case with a deployment model of Classic; everything else is Resource Manager. From a practical standpoint you're not really going to notice, because you're just going to be pressing buttons, but underneath, that's the only case where it varies. For replication, you'll notice that version 2 has the most options, and if these don't make sense, don't worry; we cover all the replication options in an upcoming slide. For Block Blob Storage you can see it's very limited, same with File Storage. When we're talking about access tiers (how quickly you can access files), you'll notice they're only available for General Purpose version 2 and Blob Storage; for the other ones it doesn't really matter, especially something like File Storage, because the drive is as fast as it's going to be, right? For performance tiers, you'll notice that with version 1 and version 2 we have Standard and Premium; when you're using File Storage and Block Blob Storage you're always using Premium, and Blob Storage, which again is a legacy format, uses Standard. Blob storage comes in three different types, and you'll notice that based on what you want to use, there'll be some variation there. I don't know where "page" is; I think they both support page blobs, but there are three blob types and I wouldn't worry about it too much. You'll notice that File Storage only supports the file type, and then you have version 2, which pretty much supports everything, so you can see General Purpose version 2 is a really
great storage type to choose.
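Condensing that comparison, here's the gist as a lookup table. This is my own summary of the slide's table, so double-check it against the current Azure docs before relying on it.

```python
# Summary of the account-kind comparison from the slide. Values reflect the
# lecture's table and may drift from current Azure documentation.

ACCOUNT_KINDS = {
    "GeneralPurposeV1": {"deployment": ["ResourceManager", "Classic"],
                         "performance": ["Standard", "Premium"],
                         "access_tiers": False},
    "GeneralPurposeV2": {"deployment": ["ResourceManager"],
                         "performance": ["Standard", "Premium"],
                         "access_tiers": True},   # hot / cool / archive
    "BlobStorage":      {"deployment": ["ResourceManager"],
                         "performance": ["Standard"],
                         "access_tiers": True},
    "BlockBlobStorage": {"deployment": ["ResourceManager"],
                         "performance": ["Premium"],
                         "access_tiers": False},
    "FileStorage":      {"deployment": ["ResourceManager"],
                         "performance": ["Premium"],
                         "access_tiers": False},
}
```

Notice GPv2 is the only kind ticking every box, which is why it's usually the one to choose.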
[Music] We were just talking about storage types; now let's talk about the actual storage services we can launch or utilize within those storage types under storage accounts. There are five core storage services available to us. The first is Azure Blob: a massively scalable object store for text and binary data, which also includes big data analytics through Data Lake Storage Gen2. Azure Blob is really great because you just don't have to think about the file system; you upload files and they're treated like objects, which is really nice. Then you have Azure Files, which is a file share; the great thing about Azure Files is that if you want a bunch of virtual machines to have the same file system, sharing all the same files, that's what you'd use. Then you have Azure Tables; to me this really is a database, but for whatever reason it's under storage accounts, and it's a NoSQL store for schemaless storage of structured data. Azure Queues is another unusual one; I don't know why Azure puts it under here, but it's a messaging store for reliable messaging between application components. To me that's an application integration service, but Azure categorizes it as storage. And the last one is Azure Disks, which is block-level storage volumes for Azure VMs. The idea is that for the top four, you're going to be launching storage accounts, and for disks you're going to be launching disks. It's a bit unusual, because one of the storage account kinds (I think General Purpose version 2) says you can store disks in it; I'm not really sure I understand that, maybe it's talking about backing them up or something like that. But from a practical standpoint, what we need to know is the breakdown here, and we do have a full section on Azure Disks, so we will get into that.
[Music] Now let's take a look at some of the features available on storage accounts, the first being performance tiers. Generally this is going to be for blob storage, and we have two types, Standard and Premium, so it's as simple as choosing between the two. When we're talking about performance, especially for storage, we want to be thinking about IOPS, which stands for input/output operations per second; the higher the IOPS, the faster a drive can read and write, so you can definitely assume that Premium is going to have a higher amount of IOPS. Looking at Premium performance: this is stored on solid state drives (SSDs), which we have in the picture there, optimized for low latency and higher throughput, and the use cases here are interactive workloads, analytics, AI or ML, and data transformation. On the other side, for Standard performance, this runs on hard disk drives, and you're going to have varied performance based on your access tier (we'll talk about access tiers very shortly, but the tiers are hot, cool, and archive); this is great for backup and disaster recovery, media content, bulk data processing, and things like that. The reason SSDs are generally really good for Premium performance is that they have no moving parts and the data is distributed randomly, so if you have to do a read and a write, the distance between the reads and writes is going to be a lot shorter; that's generally why you see solid state drives with Premium performance, or anything with higher IOPS. Hard disk drives do have moving parts; you see it has an arm, and that arm needs to read and write data sequentially to the disk, so it's very good at reading or writing large amounts of data that is close together, that is sequential. But the idea is that neither format is good or bad; it's just the use case that you need, so you don't always need to go with SSDs. Sometimes you
want to save money, and HDDs are really good for that. [Music]
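As a quick mnemonic for the tier choice, you could frame the lecture's use cases as a little decision helper. The workload names here are mine, not Azure's; real sizing depends on measured IOPS and latency needs.

```python
# Mnemonic for Standard vs Premium, based on the use cases in this lesson.
# Workload labels are my own; purely illustrative.

PREMIUM_WORKLOADS = {"interactive", "analytics", "ai-ml", "data-transformation"}
STANDARD_WORKLOADS = {"backup", "disaster-recovery", "media-content", "bulk-processing"}

def performance_tier(workload: str) -> str:
    if workload in PREMIUM_WORKLOADS:
        return "Premium"   # SSD-backed: low latency, high throughput
    if workload in STANDARD_WORKLOADS:
        return "Standard"  # HDD-backed: cheaper, good for sequential bulk I/O
    raise ValueError(f"unknown workload: {workload}")
```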
Let's take a look at access tiers. There are three tiers for standard storage: hot, cool, and archive. You're going to have the option between cool and hot, and archive might not show up depending on how you configure your storage account; for example, if you use a particular type of replication, archive might not be available, so if that option doesn't show up, you might have to change some of your settings. Let's quickly walk through the three options. Hot is for data that's accessed frequently; it has the highest storage cost and the lowest access cost. Again, it's for data that is frequently accessed or in active use, or data that's staged for processing and eventual migration to the cool access tier. Then you have the cool tier: this is data that's infrequently accessed and stored for at least 30 days, with lower storage costs and higher access costs. The use cases here are short-term backup and disaster-recovery data sets, and older media content that isn't viewed frequently anymore but is expected to be available immediately when accessed; I think there's actually a third use case, which is large data sets that need to be stored cost-effectively while more data is being gathered for future processing. Then we have the archive tier: this is for data that's rarely accessed and stored for at least 180 days. It has the lowest storage cost but the highest access cost, and its use cases would be long-term backups, secondary backups, archival data sets, original raw data that must be preserved even after it's been processed into its usable form, and compliance and archival data that needs to be stored for a long time and is hardly ever accessed. So just make note that cool is at least 30 days and archive is at least 180 days; I should have highlighted those for you, I don't know why I didn't. And some other things you need to know: for account-level tiering, any
blob that doesn't have an explicitly assigned tier infers the tier from the storage account's access tier setting, so you can set blobs at different storage levels. With blob-level tiering, you can upload a blob to the tier of your choice, and tier changes happen instantly, with the exception of moving out of archive. There's the concept of rehydrating: this is when you're moving a blob out of archive into another tier, and it can take several hours. You have blob lifecycle management, so you can create rule-based policies to transition your data to different tiers; for example, after 30 days we can move it to cool storage, and here is the option where you see it says 30. I think 30 is the minimum number of days you can choose, but I could be wrong, and you have the options down below: move to cool storage, move to archive storage, delete the blob. Just a few other things here. When a blob is uploaded or moved to another tier, it's charged at the new tier's rate immediately upon the tier change. When you're moving to a cooler tier, the operation is billed as a write operation to the destination tier, where the write operations (per 10,000) and data write (per gigabyte) charges for the destination tier apply. When moving to a hotter tier, the operation is billed as a read from the source tier, where the read operations (per 10,000) and data retrieval (per gigabyte) charges for the source tier apply, and early deletion charges for any blob moved out of the cool or archive tier may apply as well. Lastly, on cool and archive early deletion: any blob that is moved into the cool tier (this applies only to General Purpose version 2 accounts) is subject to a cool early deletion period of 30 days, and any blob that is moved into the archive tier is subject to an archive early deletion period of 180 days, and this charge is prorated.
Access tiers aren't the most fun thing to talk about, but there is all the information you need to know.
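The early deletion periods are easy exam fodder, so here's a tiny sketch of that rule. The 30-day and 180-day periods come from the lesson; the linear proration below is my assumption of how "prorated" works, so treat the formula as illustrative.

```python
# Early deletion periods for the cool (30 days) and archive (180 days) tiers.
# The charge is prorated; the simple linear proration here is my assumption
# of how that works, not an official billing formula.

EARLY_DELETION_DAYS = {"cool": 30, "archive": 180}

def early_deletion_days_charged(tier: str, days_stored: int) -> int:
    """Days of storage still billed if you move/delete the blob early."""
    period = EARLY_DELETION_DAYS.get(tier, 0)   # hot has no penalty
    return max(0, period - days_stored)
```

So deleting an archive blob after 60 days would still incur a charge covering the remaining 120 days, while a blob kept past its period owes nothing.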
[Music] Let's take a closer look at AzCopy. This is a command-line utility that you can use to copy blobs or files to or from a storage account. The first thing you're going to need is the executable file, and you can see they have it for Windows, Linux, and Mac. Once you download that file, you also have to make sure you have the right level of authorization for the user account you're using with it: for downloads you'll need Storage Blob Data Reader, and for uploads, Storage Blob Data Contributor or Storage Blob Data Owner. So just be aware that you need those roles available to your user account. I think I have access to everything, so I didn't even set this, it just works; but if you're in a larger company with least-permissive roles, you just need to know about that. You can gain access via either Azure Active Directory or a shared access signature. Let's take a look at that right now. The idea is that we'll type azcopy login, and this is going to ask us to sign in via the web browser; that's the Azure Active Directory option, option one. What you'll do is enter your username and password, and then you'll have to enter the code displayed there. Now you're ready to use the CLI: all you have to do is type azcopy copy, then the file, and then the endpoint of the storage account, the container, and the location you want it to go. If you want to download files, it's the same command, you just reverse the order: this is
the location of the file, and I want to download it locally. All right.
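The blob endpoint format is worth memorizing, so here's a small helper that assembles it along with the corresponding azcopy invocation. The account and container names are just placeholders for illustration.

```python
# Build the fully qualified blob URL and the azcopy command for an upload.
# Account, container, and blob names here are placeholders.

def blob_url(account: str, container: str, blob: str) -> str:
    # Pattern: https://<account>.blob.core.windows.net/<container>/<blob>
    return f"https://{account}.blob.core.windows.net/{container}/{blob}"

def azcopy_upload(local_path: str, account: str, container: str, blob: str) -> list:
    # For a download, the source and destination arguments are reversed.
    return ["azcopy", "copy", local_path, blob_url(account, container, blob)]
```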
Hey, this is Andrew Brown from ExamPro, and we're taking a look at lifecycle management, which offers rule-based policies that you can use to transition blob data to the appropriate access tiers, or to expire data at the end of its lifecycle. What you can do here is: transition blobs from cool to hot immediately when they are accessed, to optimize for performance; transition blobs, blob versions, and blob snapshots to a cooler storage tier if the objects have not been accessed or modified for a period of time, to optimize for costs; delete blobs, blob versions, and blob snapshots at the end of their lifecycles; define rules to be run once per day at the storage account level; and apply rules to containers or a subset of blobs, using prefixes or blob index tags as filters. Here's an example of adding a rule, and a few things you might want to read here: you can apply the rule to all blobs in your storage account or limit it based on a filter, you can choose the blob type and the blob subtype, and then there are the actual rules themselves. Here you can see: if last modified more than one day ago, then delete the blob; if last modified within the last two days, then move to cool storage. So that gives you
kind of an idea of the things you can do there. Okay. [Music]
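A rule like the ones in that screenshot ends up stored as a JSON policy document. The sketch below follows what I believe is the lifecycle policy schema, but the rule name, prefix, and day thresholds are made up for the example, so verify the shape against the Azure Storage docs before using it.

```python
import json

# Sketch of a lifecycle management policy document. The rule name, prefix,
# and day thresholds are made up for illustration; check the schema in the
# Azure Storage docs before relying on this shape.
policy = {
    "rules": [{
        "name": "move-then-expire",        # hypothetical rule name
        "enabled": True,
        "type": "Lifecycle",
        "definition": {
            "filters": {"blobTypes": ["blockBlob"], "prefixMatch": ["logs/"]},
            "actions": {"baseBlob": {
                "tierToCool":    {"daysAfterModificationGreaterThan": 30},
                "tierToArchive": {"daysAfterModificationGreaterThan": 90},
                "delete":        {"daysAfterModificationGreaterThan": 365},
            }},
        },
    }]
}
print(json.dumps(policy, indent=2))
```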
In some cases you'll want to be able to retrieve properties and metadata, or set metadata, for storage accounts, so let's take a look at some of the ways we do that with the CLI. The first is container properties: the idea is we have az storage container show, so here's an example where it's showing container information; that's the property information. Let's say we want to update that data; then we'd just use the update command, and the idea is we can pass along the data we want to change. There's also the idea of showing metadata, so here we're just showing the metadata on that container. I have a little bit more here: we can also update metadata on a blob, a very similar process but for a blob, and as you can imagine there is a show command as well. So there you go. [Music] Hey, this is Andrew Brown from ExamPro,
and in this follow-along we're going to learn a bit more about storage accounts and how to copy content back and forth using AzCopy. What I want you to do is go all the way to the top here and type in "storage accounts," and from here we're going to create ourselves a new storage account. We'll make a new resource group; I'm going to call this one fajo, so we'll type in fajo here, and we can have it in any location we so desire. We're going to stick with Standard here, General Purpose v2 seems okay to me, we'll go ahead and hit review and create, then we'll hit create there, and I'll see you back here in a moment. All right, now that our storage account is ready, we'll go in and create ourselves a container. I'm going to call this one kivas fajo, and we're going to leave it as private, because we technically don't want anyone to have public access to it. Now, there is a tool you should definitely know how to use called AzCopy, so if I just type that in quickly and scroll down, here are some versions we can download; if you're on Mac, Linux, Windows, etc., we can use it. I want to try to give you a consistent experience on how to utilize this, so I'm going to stumble my way through and use it via the Cloud Shell, because I think that's probably the most consistent way for all of us to use it. Make sure you are in Bash. We're going to need two things: this azcopy tar file, and some kind of file to upload, okay? So the question is, how are we going to get things into our Cloud Shell? The great thing about Cloud Shell is that when you launch it, it asks you to create a storage account, and we should be able to place those files in there. So if we go back to storage accounts at the top, and we go here, this is the Cloud Shell's storage account; if we go in here, there are no containers, but I believe what they do is set you up with a file share here, right? And in here, this is what it's doing. So what I'm going to do is upload two files. One is going to be this kivas fajo image, okay, and then I will download this Linux version here, and once that's downloaded, I'm going to upload that as well. I just dragged it onto my desktop so it's a bit easier to access, and I might rename it so it's a bit easier to work with; I'm just renaming the .tar.gz to azcopy here, if you can see, and we'll go back and upload that, okay? So we'll upload that as well, and it's only 11 megabytes, so it shouldn't take too long.
But what we need to do is find out where these files are, so we'll make our way back to our shell. I'll type ls, and I think it might be mounted here, so we'll do ls, and there are our files. What we'll have to do is untar the azcopy archive so we get the binary. I have a handy-dandy command over here that I just found, because I can never remember the tar command: we'll type tar -xvzf and then the name of the tar file, so I'm going to start typing "az" and hit Tab to autocomplete, and that should untar it. And now we have our... oh, it extracted into a folder, so I'm going to go into that folder, and now we have our azcopy binary, okay? Just to make our lives a bit easier, I want to move azcopy back one directory, so I'll type mv for move, then azcopy, then ../, which says go back a directory. If we cd back and do an ls (we'll do clear again, then ls), we can now see it's a bit easier to work with. I'm going to delete that long folder name because we don't need it; when it's a folder you've got to do a recursive delete, so -r, and we'll also go ahead and delete the azcopy.tar file, okay? We'll do clear, and then ls -la; that lists everything, because what I'm trying to see is whether this file is executable. It appears we can execute it; if we couldn't, it's usually a good habit to do chmod u+x azcopy. And then if we run this again... yeah, it looks the same, so I think we're in okay shape here. Now we're all set up to go ahead and use the azcopy command, so let's give it a go. All right, there are two ways we can
um or authenticate with easy copy and if you remember from my handy dandy az copy
slide we have two options which is we can use this login or use the shared access
signature we're going to give both a try but we'll start out with azlogin okay so we'll do type in az copy
login and what that will do is it will prompt us and ask us to sign into the
browser and enter this code so i'm just going to grab this link here and we will copy it
i'll make a new tab paste it in and once here we'll go ahead and enter in this code okay
and we'll just say yep andrew brown and so now that is now synced so now we should be able to use the az copy
command uh in order to use this again we pull up my reference here we have a z copy copy the name of the file and then
the name of the path we want wanted to go to okay so what we'll have to do is uh in our
container here it's called key vas fajo so we'll have to remember that but if we go here we're looking for that publicly
accessible url or that fully qualified domain url um i would have swore it was
here um oh sorry we're in the wrong storage
account by the way we have to go into our actual faggio one here just in case
just making sure we're in the right place but let me just go find that url i can't remember if we have to assemble it
by hand or if they just have it handy here so just give me a moment all right i'm back and i had to pull up
the docs for this one which is not a big deal um but the structure here is we're going to
have https the account blob or windows.net the name of the container and the path to the blob okay
so that's what we're going to do here so let's give it a go and see if we can get it to work the first time around here so
Well, before we type the command, I'm going to type clear and then do ls. So we type in azcopy, and the subcommand is cp, or you can type copy if you want; cp is the abbreviation. Then the path to the file, which in our case is local, so I just typed k and hit tab. The next thing we type is https://, then the name of the account, so fajo, then .blob.core.windows.net/, then the container name, keyvasfajo, then a forward slash, and then we can name the destination file whatever we want; I'm going to name it the same as the local jpeg. Hopefully this just works. It looks like it worked, pending; no, it says the file transfer might have failed. Hold on; it says we don't have access to perform that operation there.
So give me a moment. All right, I figured it out; there was one key thing we forgot to do. The reason we were getting a 403 is as plain as it sounds: we didn't have access. If we go back to our storage account, all the way back to storage accounts and into fajo, on the left-hand side you see it says Access control. If we go in there and look at Role assignments, we can see what roles people have. In order to access blob storage, to be able to upload and download, you have to be either the Storage Blob Data Owner or the Storage Blob Data Contributor. To do that, you just go up to Add, do a role assignment, and assign yourself that role. It does take about five minutes to take effect, so even after adding it to my user I wasn't able to copy right away, but after waiting a little bit I was able to do a test with list. It didn't print anything, but that means there was no error, so we probably do have access.
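For reference, that same role assignment can be done from the CLI. The subscription ID and assignee below are placeholders, and the az command itself is commented out since it needs a signed-in session:

```shell
# Build the scope string for a role assignment at the storage-account level.
# The subscription ID here is a placeholder; the account name is from this walkthrough.
SUBSCRIPTION="00000000-0000-0000-0000-000000000000"
SCOPE="/subscriptions/${SUBSCRIPTION}/resourceGroups/fajo/providers/Microsoft.Storage/storageAccounts/fajo"
echo "$SCOPE"

# Grant yourself blob read/write (takes a few minutes to propagate):
# az role assignment create \
#   --assignee "you@example.com" \
#   --role "Storage Blob Data Contributor" \
#   --scope "$SCOPE"
```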
Another little thing I found out, which is nice: if you click into a container, the Properties page actually gives us the URL, so that makes our lives a little easier; we don't have to type that whole thing out and try to figure it out. Let's go back and try to actually do a copy now; that's what we've been trying to do this entire time. So I'm going to type in azcopy and then the local file, keyvasfajo.jpg.
Then I'm going to paste in the URL I just got from Properties, put a forward slash on the end, and type keyvasfajo.jpg as the destination name. Oh, you know what, we've got to put the word copy or cp in front of it. And it looks like it completed; that looks good to me. If we make our way back to our container, we can see that it's uploaded. So there we go: we were able to upload it with Active Directory auth. Now let's give it a try with a SAS.
All right, let's give this another go, and this time we're going to use a SAS. It's as simple as attaching the token to the end of the URL for the destination or the source, depending on what we want to do. First, I'm going to delete the file we just uploaded, since we already have it, and then go back to my storage account, because what we need is a SAS. I'll type it into the search; oh yeah, we can get there that way, that's great: Shared access signature. Now, what do we need access to? We're only using blob storage, so we get rid of File, Queue, and Table. Allowed resource types will be Container, because that's what we're working with. We aren't deleting, and we don't necessarily need List, but we can leave it there; Read, Write, Add, Create seems okay to me. Enable deletion of versions we don't have to turn off, so we'll just leave it in place. We'll allow both HTTP and HTTPS to have some flexibility, and this looks okay. We have a few different keys; we're going to stick with key1. So I'll go ahead and generate the SAS and connection string. This is the token we want; see how it has a question mark? That's what's going to allow us to put it on the end of our URL. First, though, I don't know if we can run azcopy logout; let's see if we can do that.
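Attaching the SAS is just string concatenation onto the blob URL. A minimal sketch, with a placeholder token value:

```shell
# A SAS token is a query string; it gets appended after the blob URL.
URL="https://fajo.blob.core.windows.net/keyvasfajo/keyvasfajo.jpg"
SAS="sv=2021-06-08&ss=b&srt=co&sp=rwac&sig=PLACEHOLDER"   # not a real token
echo "${URL}?${SAS}"

# The upload itself (requires azcopy; no login needed when a SAS is supplied):
# azcopy copy ./keyvasfajo.jpg "${URL}?${SAS}"
```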
Oh, we can, great. That's just making sure we're no longer authenticating that way. So I'll go back up to the copy command with the plain URL, and this should fail, right, because we don't have access anymore. Then what we should be able to do is take this SAS string, place it on the end as such, and hopefully it just works. And it says done, done, done. So let's go back to our storage account to see if that worked as expected. Let me just move this window down below; it gets a bit hard to see when you have it open like that over the storage account. We'll go into Containers, click into keyvasfajo, and we don't see the file. So let me give it another try.
All right, let's give it another go. You know, I just don't entirely trust that URL. Actually, now it says failed to perform copy, no SAS token or OAuth provided, which would be fine, except in this case we did provide one, and we saw it report done, done, done; those are the values it's printing out. So maybe what we should do is regenerate the token and be a bit more permissive; maybe that will give us an easier time. Oh, this is slightly different: there's a shared access token option scoped to the container itself. Not sure why this looks different all of a sudden, but what we'll do is go back into fajo, find the container again, click into the container, and generate the token from there. We'll use the account key, which is fine, key1, and I'm just going to put everything on here to make our lives a bit easier. The start date is fine, the end date is fine, we'll allow HTTP and HTTPS, and we'll generate that token. So here we have it again. We'll just grab the token, make our way back to the terminal, hit up, remove the old token from the end, and give this another go; paste it in. I don't see a question mark on this one, so this is not an easy one; we'll put the question mark in ourselves and hit enter. It just says stopped, then done, done, done, which makes you think it's working, but clearly it's not. Let's double-check the container to see what's there. Still no file. Well, what if we grab the full URL version instead? Maybe it's an encoding issue, or maybe it doesn't like the way we built the link. Hit enter; still says stopped.
I don't know. All right, I'm back. You know what, I thought I was doing everything right, so as a sanity check I opened up my command prompt and installed a different version of azcopy, the Windows version, and using the exact same command it worked. I just want you to know this is something I run into a lot with Azure: you try it one way and it doesn't work, but somewhere else it works no problem, just because Azure tries to support all use cases. Sometimes you'll spend a lot of time second-guessing yourself, but the command I showed you was definitely correct. It could be that the azcopy version was out of date; remember, when we ran azcopy on Linux it was suggesting to get a newer version, even though I had downloaded the latest one. It's really hard to say, but I wanted you to see how you do an azcopy, and don't get frustrated if you get stuck on stuff like this. Okay, we're all done here, so let's go ahead and tear down our stuff. We'll go to fajo, that's the resource group we set up, delete that resource group, and we're all done here.
Hey, this is Andrew Brown, and in this follow-along we're going to learn how to work with Azure's SDKs. We're going to work with a variety of different languages and interact with a storage account. So make your way over to your portal, type in Storage accounts across the top, and we'll create ourselves a new storage account. We'll call this one mysdkplayground, and I'll type the same thing for the account name; you might have to put some numbers on the end there, because these are fully qualified domain names and have to be unique. East US is totally fine, Standard is fine, so we'll go ahead and hit Review + create, give it a moment, then hit Create. We just have to wait about 30 seconds for this to finish.
While this is going, let's make our way over to GitHub, because we're going to need a repository to play around in. I'll create a new repository; whoops, I don't want a template, I want it under my own account. This will be my-sdk-azure-playground; I'm putting the word azure in there so I know what this is later on. We'll say add a README, and we're not going to choose a .gitignore file, because we'd have to vary it across a couple of different folders. You should have Gitpod set up, or if you don't, you can use your local machine; it's up to you, but it's a lot easier to use Gitpod here, and it's free to use, so there's no reason not to. All you're doing is attaching the Gitpod prefix to the front of your repository URL, or if you want the button you can go get the Chrome extension. I'm going to hit Gitpod and spin up that environment. Going back to our storage account, you can see it's deployed, so we'll go to that resource. What we'll need here is a single container, so that's something we'll create right away: container1, private, all one word with no hyphen, because I think that's how I wrote the instructions. In here we'll need to upload some kind of file, so I'm just going to upload a couple of images; upload any files you like, but you'll have to change the names accordingly. Then we need credentials. I was going to look under access controls, but I'm actually going to do this at the storage account level, so we'll go back up a layer to the resource and look under Access keys; we're going to need our connection string. Or sorry, shared access signatures; well, we'll figure it out in a moment, because I thought it was under Access keys that it shows it. Oh, the connection string is right here, so we'll be in good shape.
So back in Gitpod, we're going to make three folders: we're going to have ruby, we're going to have python, and we're going to have js for JavaScript. In each of those we'll create a package manifest. For Ruby I want to generate the Gemfile, so we'll cd into that directory and run bundle init; that creates a way for us to manage packages in Ruby. For Python we'll create a new file called requirements.txt, and for JavaScript it's npm init (I should have done -y on that one). These are the three ways we bring the external libraries of the Azure SDK into each of these programming environments. First, Ruby: we need to install a gem called azure-storage-blob. For Python, I'm just finding the name here; it's azure-storage-blob as well, and we can give it a version; we could also take the version out, and it probably would still work without it. Then for package.json we'll install the library there too. I'm going to cd into the js folder, because it's easier to install JavaScript packages from the CLI: it's npm install @azure/storage-blob. All right, that one's installed; if we go to package.json we can see it's required. For the Gemfile, to install in Ruby we have to run bundle install. The reason I'm doing them all in parallel is that I'm trying to show you there's a pattern: you're installing a package, basically doing the same thing over and over, and it's just that the syntax is slightly different. For Python it's pip, pip install -r requirements.txt. I don't always remember all these things; I might not remember all the details, but I know the patterns are the same, and that makes it a lot easier. We call this cloud programming because you don't need to know everything in order to do this.
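The three manifests from above can also be written out directly. A sketch, using the package names from this walkthrough; the @azure/storage-blob version range here is illustrative, not taken from the video:

```shell
# One pattern, three package managers.
mkdir -p ruby python js

# Ruby: Gemfile (normally generated with `bundle init`, installed with `bundle install`)
printf 'source "https://rubygems.org"\ngem "azure-storage-blob"\n' > ruby/Gemfile

# Python: requirements.txt (installed with `pip install -r requirements.txt`)
printf 'azure-storage-blob==2.1.0\n' > python/requirements.txt

# JavaScript: normally `npm init -y && npm install @azure/storage-blob`;
# the version range below is an illustrative placeholder.
printf '{ "dependencies": { "@azure/storage-blob": "^12.0.0" } }\n' > js/package.json

cat python/requirements.txt
```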
The first one we're going to set up is the Ruby file, so we'll make a new file called main.rb, which is an okay name, and we'll get coding with this one first. We'll require azure/storage/blob, and here we want the account name and the account key. Going back to the portal, I'll grab the first key with Show. I don't want to paste it inline; I want it down in an environment variable, so I'll run export with the account key variable, equals, the value in double quotes, and hit enter, and then the same thing with gp env, which is for Gitpod environments. I'm setting it twice so that if we have to restart this environment, these keys will persist. In the Ruby file we read it with ENV and the variable name; that's how you import it into Ruby. This one is the account name and this one is the account key, and if we write puts account_name and puts account_key, that should print them out so we can see if they work. We'll run bundle exec ruby main.rb; I'm in the wrong folder, we've got to make sure we're in the ruby folder before we do that. This is a common way in Ruby to run things in the context of the Gemfile, and you'll see that kind of trend across all these programming environments. Hit enter, and it printed out the first one but not the second; we didn't set the account name yet, and that's the storage account name. So we'll export the account name too. The reason we're using export is that it actually loads the environment variable: if we grep the environment for account, you can see it's loaded into our developer environment, and the Gitpod environment is going to persist it if we restart. Then we set that one for Gitpod as well; there we go, they're both set, and if I double-check, they both show up, so that's good.
The next thing we want to do is make a client; we want to establish a connection. So blob_client equals Azure::Storage::Blob::BlobService.create. If you're wondering where you find these things, you just search on Google: azure blob storage ruby, maybe with documentation, and through there you can find the information; there's probably a documentation page. If we go here, I know there's one, and scroll down to blobs: yes, this is pretty similar code. There are all sorts of ways to find it; you just have to search around, and eventually you acquire that knowledge. The create call takes the storage account name and the storage access key, so those will be our account name and account key. Down below we'll just puts the blob_client to make sure it works, so we'll hit up on the bundle exec command, and it prints out some kind of object, so we know that at least the client is being created.
Now let's actually print something out. We'll do blob_client.list_containers.each do with a container block variable, just doing iteration here, and puts container.name. If we hit up and rerun, it prints out container1, so that container exists. Say we wanted to create a container; that's something we can do from here as well: blob_client.create_container with some container name. Run it again, and if we go over and take a look at our containers in the portal, we should see a second container with the name we gave it, so that's working fine. Now say we want to print out the contents. We already list the containers up top, so I'm not going to do it twice; we'll do it inside that loop and might as well print the contents of all the blobs, to see what's actually in each container: blob_client.list_blobs with the container name, .each do with a blob variable, and then puts with string interpolation of blob.name. Hit up again; list_blobs, that's probably what it's supposed to be there, and you can see it's printing out the names, so that's working fine. It did have an HTTP error; oh, you know what, it's because we already created that container, so we'll just comment that create line out; it was just a one-off, and it's not going to create the same container twice. So it's printing the contents of container1, and there's nothing in the other container we created. That's pretty straightforward.
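Gathering the Ruby steps into one sketch: the credential-loading part runs anywhere, while the Azure calls (commented out) assume the azure-storage-blob gem and a real storage account behind the credentials:

```ruby
# Credentials come from environment variables, as set with export / gp env above.
# The fallbacks here are placeholders so the sketch runs without a real account.
account_name = ENV.fetch("ACCOUNT_NAME", "mysdkplayground")
account_key  = ENV.fetch("ACCOUNT_KEY", "fake-key")

# The actual SDK calls (need the azure-storage-blob gem and valid credentials):
# require "azure/storage/blob"
# blob_client = Azure::Storage::Blob::BlobService.create(
#   storage_account_name: account_name,
#   storage_access_key:   account_key
# )
# blob_client.list_containers.each do |container|
#   blob_client.list_blobs(container.name).each { |blob| puts "#{blob.name}" }
# end

# The account name becomes the host of a fully qualified domain:
puts "https://#{account_name}.blob.core.windows.net"
```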
Let's go take a look at Python. We'll make a new file called main.py. Normally, if you want to isolate your libraries, you create a Python virtual environment; I didn't do that, and I don't always remember how, because I don't work in Python that often, but I know it exists. That's the idea behind bundle exec, or npm start which you'll see later: running in the context of the isolated libraries we want to use. Anyway, let's get started with Python; it's a very similar process. We import BaseBlobService from azure.storage.blob.baseblobservice, a little bit more verbose; I'm not a big fan of Python. I need to figure out how to get environment variables; I'll be back in a second. Okay, to get environment variables you just import os and then use os.environ with square brackets; we could also just use .get, which is probably nicer. We'll read the account name and the account key; you can see the process is a little different. Instead of puts it's print, so print the account name and print the account key; starting to look familiar, right, just slightly different. We have to be in the right folder, so we'll make our way over to python and run python main.py; we don't have to run it in any special context because the packages are already there. It says no module named azure.storage.blob; I thought we did the install, so we'll run it from requirements.txt again. It's totally possible I typed something wrong, so I'm double-checking: azure.storage.blob.baseblobservice looks correct to me. We'll try this again; I'm doing a quick search just in case it's a versioning issue, because we did have a version in there, 2.1.0. I just don't want this to mess us up, so I'm going to pin it to 2.1.0 and try again, because you never know if these requirements change. It turns out pip wants double equals; yeah, it does, azure-storage-blob==2.1.0, and now it's working. So the versioning does matter; we're not on the latest version, but honestly, for this exercise, it doesn't really matter.
Now that we have those two printing out, we can go ahead and make our client, or I guess service in this case; the wording is a little different in each SDK, I don't know why, it's just how they do it. So blob_service equals BaseBlobService with account_name equals our account name and account_key equals our account key, and we'll print out the blob_service like before. Oops, I wanted to run python main.py here; and that prints out no problem. Now we need to iterate through things: containers equals blob_service.list_containers(), then for container in containers. Python is indent-based, so there's no closing statement; we just indent and print container.name. I wasn't trying to run that; I think something ran down below by accident, so I'll just type exit. It thinks I'm in a shell; I'm not in a shell. Okay, python main.rb again; oh, I'm back in the main folder, and sorry, it's .py, not .rb, which is why it says no such file or directory. And now it's saying the environment variables don't exist. I think it's because we entered some kind of Python environment shell; I don't know exactly what happened, but something ran and put us into it, and that wasn't something I intended to do. So I'm going to update the .gitignore and commit the code to reset the environment, because I really don't want to be in a Python environment like that; it was just a mistake. Come on, let me add this darn file; I'm just trying to commit this single file, git add .gitignore, so things are a bit more manageable, and then I'll commit all the code here, the Ruby and some Python. Then I'll just close this environment and reopen it; those environment variables should still be set, because we stored them in the Gitpod environment. I'll give it a moment to respin up the environment.
All right, our environment is back up, so I'll make my way back into the python directory and pip install the requirements again; hopefully this time we don't get that weird problem we had before, because I really don't want to have to restart again. Notice it's printing the container names, pretty similar to what we did before. Now we just need to print out the actual objects like we did with the Ruby script, so back in the main file, scroll on down, and we'll do blobs equals blob_service.list_blobs with the container name, then for blob in blobs, and print with an f-string. I think it's just single quotations; I always want to do doubles there, but it's a single quote with the f in front, then curly braces around blob.name. You saw this in the Ruby one, where it was a pound sign inside double quotes; this is called interpolation, where you inject a variable into a string, and each language does it a little differently. Let's take a look: at this point it prints out the blob names, same as before, so very similar.
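The same recap for Python; the commented calls assume the azure-storage-blob==2.1.0 package installed above, while the rest of the sketch runs with the standard library alone:

```python
import os

# Credentials from the environment, with placeholder fallbacks so this runs anywhere.
account_name = os.environ.get("ACCOUNT_NAME", "mysdkplayground")
account_key = os.environ.get("ACCOUNT_KEY", "fake-key")

# The actual SDK calls (need azure-storage-blob==2.1.0 and valid credentials):
# from azure.storage.blob.baseblobservice import BaseBlobService
# blob_service = BaseBlobService(account_name=account_name, account_key=account_key)
# for container in blob_service.list_containers():
#     for blob in blob_service.list_blobs(container.name):
#         print(f"{blob.name}")

# Same fully-qualified-domain pattern as in the Ruby version:
print(f"https://{account_name}.blob.core.windows.net")
```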
Let's go take a look at JavaScript; that's our next one. We'll make a new file called main.js, and we'll update package.json so we have a way of calling the script: we'll add a start script that runs node main.js. Then up top we'll load the library: const, curlies, BlobServiceClient and StorageSharedKeyCredential, equals require with @azure/storage-blob; you've got to spell it right or it will mess up for sure. Then const accountName and const accountKey. How do we load environment variables in JavaScript? I'll be back in two seconds. Actually, I'm surprised I don't remember, because I've done it in so many other follow-alongs; it's just process.env. That's the problem when you move between a lot of languages: you have to look things up multiple times, but it's okay, as long as you know what you need to look up. So process.env with the account name and process.env with the account key are the two we'll load. Next we need to create a credential: key credential equals new StorageSharedKeyCredential with accountName and accountKey, and down below we'll create the client: new BlobServiceClient. It's interesting how they're all named slightly differently: in Python it's just a service, Ruby's is just a client, and in JavaScript it's a service client. The first argument is a URL, and this is what I mean when I say things are fully qualified domain names: you can literally call these services by domain, so it's https, the account name, then .blob.core.windows.net, and then we pass the key credential. We'll console.log the blob client; I don't know if we'll get anything intelligible out of this, but as long as we don't get an error it should be fine; I don't know what it returns in JavaScript, so we'll find out in a second. We'll make our way over to the js folder and run npm install; we must have a syntax error, there's a comma missing; oops, the editor is just being silly. We have to do the install again because I restarted the environment and the packages did not persist. Then npm start: we are getting data, so there are no errors, which is good, and we're getting an object back, so this looks good.
We'll go back to main.js and do what we've been doing, which is iterating through things. The first thing we'll need is a new function: async function containerInventory. I really, really don't like having to do asynchronous stuff when I don't have to, but it's a common feature; other languages have it too, but it's something we do often in Node.js. Inside we'll write for await, parentheses, const container of containers, curlies, console.log container.name, and then we'll call the function. Hopefully this works: npm start. We don't need the earlier console.log anymore, it's just going to get in our way, so I'm going to take it out. It doesn't like the in; it's supposed to be of in a for await. Change it to of, and we iterate through and get our two containers. Now we need to go get the blobs within each one. Let me just think about this for a second: we need const containerClient equals the blob client's getContainerClient with the container name, then let blobs equal containerClient.listBlobsFlat, then for await, const blob of blobs, console.log with backticks; is it backticks? Yeah, it's backticks, with a dollar sign and curlies. Again, this is how you see each one has different interpolation: in Python it was a single-quoted string with the f on the front, Ruby was just double quotations with a pound and curlies, so they're all just slightly different. Hopefully this works. No: container is not defined. What line is it complaining about? The listBlobsFlat line, so I must have spelled something wrong. I'll paste it in; it didn't paste right, so I'll just retype container. Try it again, and there we go. So that's all three of them, and that's all I really wanted you to do: just go through the motions of using the SDK in a few languages, and see that there are patterns to languages, so it's not as hard as you might imagine.
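And the JavaScript version of the same pattern; the commented calls assume the @azure/storage-blob package, while the rest runs in plain Node:

```javascript
// Credentials from the environment, with placeholder fallbacks so this runs anywhere.
const accountName = process.env.ACCOUNT_NAME || "mysdkplayground";
const accountKey = process.env.ACCOUNT_KEY || "fake-key";

// The actual SDK calls (need the @azure/storage-blob package and valid credentials):
// const { BlobServiceClient, StorageSharedKeyCredential } = require("@azure/storage-blob");
// const credential = new StorageSharedKeyCredential(accountName, accountKey);
// const client = new BlobServiceClient(
//   `https://${accountName}.blob.core.windows.net`, credential);
// (async () => {
//   for await (const container of client.listContainers()) {
//     const containerClient = client.getContainerClient(container.name);
//     for await (const blob of containerClient.listBlobsFlat()) {
//       console.log(`${container.name}/${blob.name}`);
//     }
//   }
// })();

// Template-literal interpolation builds the fully qualified domain:
console.log(`https://${accountName}.blob.core.windows.net`);
```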
We'll close that out; it doesn't really matter whether we leave the repository around or delete it, but for the resource group with the storage account, we will go ahead and delete. I'll go into mysdkplayground and click through to the resource group; there are lots of ways to get to the resource group. We'll delete the resource group, which also deletes the storage account.
So there you go. Hey, this is Andrew Brown from ExamPro, and in this follow-along we're going to work with blob storage object metadata, specifically using PowerShell to set it and verify that it's working. We need an environment to run PowerShell, so we'll use Cloud Shell. We'll expand that, and you'll have to be in PowerShell mode. If this is the first time you're ever running Cloud Shell, you might have to accept the creation of a storage account for it; it might say it needs to create a storage account for this, and you just say yes. Notice that it runs Install-Module for Azure for you. You can also do this on your local machine: if you have Windows, open up PowerShell (type powershell, right-click, Run as administrator), and if you're on a Mac, you can install PowerShell there too, but to be honest this is so much easier if we just use Cloud Shell. I do want to show you a couple of things, though. To install the Azure module locally, you run Install-Module -Name Az, and this thing takes forever, I swear to you it takes forever, which is why I don't really want to do it this way. I'll just bump the font up to 24, say Yes to All, and let that install in the background. But I want you to know that's how you install all the Azure cmdlets, because it's very useful to know, and we do it a couple of times in this course; we have to wait, and it takes forever each time.
While that's installing in the background, back in Cloud Shell we'll assume we've already run Connect-AzAccount and installed the module, and now we're just getting set up. It's a good habit, whenever you're using PowerShell with Azure, to type clear and then make sure you're on the correct subscription, because you can be in the wrong place. Even though I know this is the one we're on, I'm going to set it again so we get into the habit: Set-AzContext with the subscription parameter and our subscription name, Azure subscription 1. That makes sure it's explicitly set. Now we want to get access to a resource group, and of course we could easily do this through the console, but I wanted to learn some PowerShell. So I'll make a new tab and head over to Storage accounts, because we're going to need a new storage account for this. We'll hit Create, make a new resource group called my-blob-metadata, and name the account something like blobmetadata88; whatever, it doesn't matter as long as it's not taken. We'll hit Review + create, and create it.
We'll give it a moment so we can go to the resource group; there are a few things we'll have to type in once it's done. I'll type clear (it always messes up), and I'm going to expand the shell, because we really don't need anything in the background. We want $resourceGroup equals, and then whatever we called it, my-blob-metadata; whoops, that's not what I want, I'm missing a letter on the end there. Then we need the storage account name, so go to the resource; that's the name right there. All we're doing right now is setting variables, and it's not copying properly because you have to right-click; you can't use hotkeys there, why, I don't know. Then $storageKey equals, because the idea is we're going to fetch our storage keys: Get-AzStorageAccountKey with -ResourceGroupName $resourceGroup and -Name $storageAccount. I just noticed I spelled that variable wrong; I'll hit enter anyway, knowing it won't work, shrink the font back down, go back and fix $storageAccount, hit enter, and that fetches the keys.
Then we need our context: New-AzStorageContext -StorageAccountName $storageAccount -StorageAccountKey $storageKey. It doesn't like something, and that's because we need to actually get the value: if we echo $storageKey and look at what was returned, notice that there are two keys, so we have to do something like $storageKey[0].Value. Trying that, it says it cannot find a parameter for Name... oh, that doesn't make sense; it's because I arrowed back to the wrong command in the history. We really wanted the context command, so you just have to be careful what you're doing. There we go: now we have a context, which just tells us what we're bound to. Now we can run Get-AzStorageContainer -Context $context. If you're wondering where you get all these commands: once you find one, you pretty much find them all. There are tons of them in the reference, you can read through them, and they have tons of examples, so you can work your way through and figure it out.
I was expecting output, but I guess there are no containers yet, which kind of makes sense. So we'll go over to the portal and create container1 (keep it all lowercase to make our lives easier), then go back to the shell, hit the up arrow, and you can see there's a container. We can also pass -Container container1 to specify even greater context. I thought the parameter might need a capital C, but all these parameter names are case-insensitive, so lowercase is fine. We now have a reference to a container, so what we need to do next is reference a blob. I'm going to type Get-AzStorageBlob -Blob datapipe.webp -Container container1 -Context $context. I think we'll have to upload something first, but I'm just going to try it for fun: I have an image on my desktop that I've been using for a while, datapipe. It isn't there yet, but let's see what happens. It errors, and that happens a lot: the cmdlet always has to be a verb and then a noun, and here it's just saying the file isn't there, which we already know. So we'll go into our container, hit Upload, and upload datapipe (it's actually a .png, which is a lot easier on us). You'll have to upload your own file, and it will have its own name, so go find an image yourself. Hit Enter again.
Now if we run Get-AzStorageBlob again, we have a reference to that blob, so let's get some information about it: $blob.BlobClient.GetProperties().Value. We get a lot of information about the blob, and notice that there is no metadata. I think we can get just the metadata by appending .Metadata; nothing comes back, so there isn't any set. Just to make sure I'm typing it right, I'll request something I know is there, like ETag. Yes, so there's simply nothing set. So let's make some metadata: $metadata = New-Object 'System.Collections.Generic.Dictionary[string,string]'. We're creating a dictionary that expects a string key and a string value; hopefully that makes sense, because we have to add a key and a value, which is how tags, and metadata, mostly work. Then $metadata.Add("author", "exampro") and $metadata.Add("department", "it"), and finally $blob.BlobClient.SetMetadata($metadata, $null) actually sets the metadata. I don't know why the second argument is null; if we wanted to look it up we could, but it doesn't really matter. It does set the metadata, because if we fetch our blob's properties again, there's now metadata on it. Just to make sure it's working correctly and isn't a local artifact, we'll null out the variable, fetch the blob again, and there you go. That's all I really wanted to show you; hopefully you got a little more PowerShell experience. To clean up, we'll go to the resource group, go into it, delete it, give it a moment, and there you go.
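Collected in one place, the workflow above looks roughly like this. It's a sketch: the resource group, storage account, container, and file names are the throwaway values from this follow-along, so substitute your own (it also needs a live subscription, so it can't run outside Azure).

```powershell
# Pin the subscription so commands run against the right one
Set-AzContext -Subscription "Azure subscription 1"

# Variables for the resources created in the portal
$resourceGroup  = "myblobmetadata"
$storageAccount = "blobmetadata88"

# Fetch the account keys (two are returned; take the first key's value)
$storageKey = Get-AzStorageAccountKey -ResourceGroupName $resourceGroup -Name $storageAccount
$context    = New-AzStorageContext -StorageAccountName $storageAccount `
                                   -StorageAccountKey $storageKey[0].Value

# Reference the container and the uploaded blob
Get-AzStorageContainer -Context $context
$blob = Get-AzStorageBlob -Blob "datapipe.png" -Container "container1" -Context $context

# Inspect properties; Metadata will be empty at first
$blob.BlobClient.GetProperties().Value.Metadata

# Build a string-to-string dictionary and set it as the blob's metadata
$metadata = New-Object 'System.Collections.Generic.Dictionary[string,string]'
$metadata.Add("author", "exampro")
$metadata.Add("department", "it")
$blob.BlobClient.SetMetadata($metadata, $null)
```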
Hey, this is Andrew Brown from ExamPro, and in this follow-along we are going to copy a blob from one container to another using PowerShell. So what I want you to do is go to the top, type in Storage Accounts, and create a new storage account. The resource group is going to be called myblobcopyps (ps for PowerShell), and the storage account blobps plus a bunch of numbers, because naming is always a pain: these become fully qualified domain names. Hit Review + create, then Create, and wait a little bit; it only takes a few seconds, but while it's going we might as well open up our Cloud Shell environment. If this is the first time you've ever opened Cloud Shell, you will have to accept that it sets up a storage account. By the way, off-screen I was doing this in another follow-along and never came back to it, but if you want Azure PowerShell installed locally, you have to run Install-Module -Name Az from the PowerShell Gallery (this takes forever), and once it's installed you run a connect command, Connect-AzAccount, which prompts you to log in with Microsoft. It's not going to work here on my local machine because I don't have anything authenticated, and I just don't want to bother with it; anyway, we're going to do it in Cloud Shell because it's so much easier there.
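For reference, the local setup mentioned above is just two commands; you don't need either in Cloud Shell, which ships with the Az module preinstalled:

```powershell
# One-time install of the Az module from the PowerShell Gallery (can be slow)
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force

# Opens an interactive Microsoft login to authenticate your session
Connect-AzAccount
```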
We'll go back over to our storage account (oops, that tab is from an old follow-along), open it, and create a new container called container1, then container2, and expand the shell. Now that our containers are created, let's make sure we are in the correct subscription; this is something you should always do, just to make sure you're in the right place. I know it's already set correctly, but just to go through the habit of setting it, we're going to set the context for the subscription explicitly, because being in the wrong subscription is something that will catch you off guard. Then we need to set up some variables. We'll have $resourceGroup = and we'll have to find the name, so we'll go to the Overview here: myblobcopyps. Let's click off here; I don't know why it's doing this weird select, sometimes that happens. There we go.
So that is the name of the resource group; then we need the storage account name, which we can get by going up one layer to the resource. (Oh boy, I always forget that you cannot Cmd+V in here; you have to paste manually.) Then we'll get the storage key. Of course we could easily get it by going to the left-hand side and clicking a couple of buttons, but again we want to practice our PowerShell, so we'll do Get-AzStorageAccountKey -ResourceGroupName $resourceGroup -Name $storageAccount. If I've typed everything right we'll hit Enter... I don't think I did: "a parameter cannot be found that matches the name ResourceGroup" means a parameter name doesn't look right. After fixing it, if we echo $storageKey we can see we have the keys; [0] gets the first one and .Value gets the value, which is what we'll have to use. Then we set the context: New-AzStorageContext -StorageAccountName $storageAccount -StorageAccountKey $storageKey[0].Value (I initially forgot the [0].Value). Now if we echo $context, it shows us a proper context. There we go.
All right, now we want to upload some kind of blob, or actually we don't need to upload one through the shell; we just need to store one. I'm going to use the same image I keep using in my lab; you might use a text file instead, it's up to you. In the portal I'll go to the containers, click into container1, and upload a new file, datapipe.png. Now that that's been uploaded, I'll type Get-AzStorageBlob -Blob data-pipe.png -Container container1 -Context $context (we call him data-pipe because he has a pipe in his mouth in the image). It errors at first because of the hyphen: we spelled the blob name wrong. Fixed, and it returns the blob, so that is good. The idea is we want to download it to local storage so we can move it to the other container. We still have the context, but we set a destination, and note the cmdlet is Get-AzStorageBlobContent: content, not context, because that's how we get the content. I'll say yes to overwrite because I've done this one before, so it's already here. If we run ls, the file is now in our Cloud Shell (if you have a leftover test file, just remove it). Now we need to upload to the other container: Set-AzStorageBlobContent -File data-pipe.png -Container container2 -Context $context, and we'll give it some properties, because it's always good to set the content type: -Properties @{ContentType = "image/png"}. Whoops, ContentType has no hyphen inside the hashtable; I always try to put one because I think I'm writing an HTTP header. A couple more typos later (it's Set-AzStorageBlobContent with the Az prefix, and I had to spell "content" right; could you tell I'm dyslexic? I actually am, so I type a lot of stuff wrong) and the upload runs against container2.
It says it's uploaded, so let's go verify that's true. We'll go over to our storage account, go to Containers, and check container2 to see if the file is there. It's there, so we are all done. Let's go clean up: we'll go to Resource groups and look for the resource group we created... what is it called? I forgot, or it's just not showing up, so I'll make my way over to Storage Accounts since I can't seem to find it... where did it go? I'm so confused; what is our resource group called? Refresh. I'm going crazy because it was here a moment ago. Back to Resource groups, and it's right here again. This is Azure for you: sometimes you think something's there but it's not, and then it is; it just has to propagate. It is in our account, so you have to have confidence about what is there. We'll go ahead and delete, and there we go, we are all done.
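Put together, the copy workflow above is roughly the following sketch; the resource group, storage account, and file names are this follow-along's throwaway values, so substitute your own.

```powershell
$resourceGroup  = "myblobcopyps"
$storageAccount = "blobps12345"   # hypothetical; yours will differ

# Key and context for the account
$storageKey = Get-AzStorageAccountKey -ResourceGroupName $resourceGroup -Name $storageAccount
$context    = New-AzStorageContext -StorageAccountName $storageAccount `
                                   -StorageAccountKey $storageKey[0].Value

# Confirm the source blob exists in container1
Get-AzStorageBlob -Blob "data-pipe.png" -Container "container1" -Context $context

# Download it into the Cloud Shell's local storage...
Get-AzStorageBlobContent -Blob "data-pipe.png" -Container "container1" -Context $context

# ...then upload it to container2, setting a content type as we go
Set-AzStorageBlobContent -File "data-pipe.png" -Container "container2" -Context $context `
                         -Properties @{ ContentType = "image/png" }
```

Worth knowing: Az also has Start-AzStorageBlobCopy, which performs the copy server-side without the download and upload round trip.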
Hey, this is Andrew Brown from ExamPro, and we are looking at Azure Active Directory, a cloud-based identity and access management service used to manage users, sign-ins, and access to AD-related resources. Azure Active Directory is Microsoft's cloud-based identity and access management service that helps your employees sign in and access resources. Those can be external resources, like Microsoft Office 365, the Azure portal, or SaaS applications, or internal resources, like applications within your internal network or on-premise workstations. You can use Azure AD to implement single sign-on, so Azure AD is basically the one solution to log into everything. We actually use it at ExamPro: we use it with Microsoft Teams; the ExamPro platform's admin panel is tied to it, so when we want to log in with credentials we have it there; we use it with AWS to log in there; and we use it to log into Azure. So it has a lot of flexibility, and if you're building applications for enterprises, they're likely using AD, which is why everybody adopts it or needs to understand it. It's a service I really, really do want you to understand and know. Azure Active Directory comes in four editions, and each tier up includes the features of the ones before it. Free has MFA, SSO, basic security and usage reports, and user management. Office 365 Apps revolves around that suite and adds company branding, an SLA, and two-way sync between on-premise and cloud. The Premium tiers really come into play for enterprise or on-premise hybrid architecture: Premium P1 adds hybrid architectures, advanced group access, and conditional access, and Premium P2 adds identity protection and identity governance. The only thing I don't like about Azure AD is that you can't really create custom access controls unless you have Premium P1 or P2, but that's just how they do it. So there you go.
Let's take a look at the use cases for Azure AD. We basically covered them in the introduction, but I just want to reiterate them in a different way with a bit of a visual so it really sinks into your brain. Azure AD can authorize and authenticate against multiple sources: it can authenticate to your on-premise AD; it can authenticate to your web application, allowing users to log in with IdPs (identity providers such as Facebook or Google login); and you can use it with Office 365 or Microsoft Azure. In the visual, notice that using Azure AD Connect we can connect Azure AD to on-premise; through app registrations we can connect our web application to Azure AD; with external identities we can use Facebook or Google login; and for cloud applications we can connect to Office 365 or Microsoft Azure.
Active Directory existed way before Azure, so let's do a quick rundown of the history so we have an idea of what we're looking at. Microsoft introduced Active Directory Domain Services in Windows 2000 to give organizations the ability to manage multiple on-premise infrastructure components and systems using a single identity per user, so it's been around for over 20 years. Azure AD takes this approach to the next level by providing organizations with an Identity-as-a-Service (IDaaS) solution for their apps across cloud and on-premise. Both versions are still used today because they have different utility: Active Directory is for on-premise, and Azure AD is the cloud-hosted version, and in many regards the two can also be connected together. But there you go.
I want to cover some Active Directory terminology. The honest truth is that for Azure you're not going to be too worried about these things, but they come up in the documentation and you'll wonder what they are, so I wanted to tell you about them up front. Even though they're not core to study, they really round out your Active Directory knowledge, and because Active Directory is such a core service to Azure and Microsoft products, you should know these things.

A domain is an area of a network organized by a single authentication database; an Active Directory domain is a logical grouping of AD objects on a network. Think of it the way resource groups logically group your Azure resources: domains are a logical grouping for your AD objects. A domain controller is a server that authenticates user identities and authorizes their access to resources. It's very common to have multiples of these, because you want redundant domain controllers for availability, or a domain controller launched nearby so people can log in from different places; they're definitely core to Active Directory. A domain computer is a computer registered with the central authentication database, and a domain computer is itself an AD object. AD objects are the basic elements of Active Directory: users, groups, printers, computers, shared folders, and so on. A Group Policy Object (GPO) is a virtual collection of policy settings; it controls what an AD object has access to. Organizational units (OUs) are subdivisions within Active Directory into which you can place users, groups, computers, and other organizational units; they're just another way of doing logical grouping. A directory service provides a method for storing directory data and making that data available to network users and administrators; a directory service runs on a domain controller. So there you go, that's the rundown of Active Directory terminology. When you see these terms in the documentation you can refer back to this, or you'll have a better understanding of all the components. I would have loved to have made a diagram, but I just couldn't find a good example of one, and I feel like there could be a really good picture for all this stuff.
Let's talk about the term tenant. A tenant represents an organization in Azure Active Directory, and a tenant is a dedicated instance of the Azure AD service. A tenant is automatically created when you sign up for Microsoft Azure, Microsoft Intune, or Microsoft 365, and each Azure AD tenant is distinct and separate from other Azure AD tenants. If you're in Azure AD and click on your tenant information, that's basically what you're looking at: that's my ExamPro one, it has its own tenant ID, and we can see it's licensed for Office 365, which tells you I'm using the Office 365 tier of Azure AD.
Remember that the domain controller is the server users authenticate against for the directory service. When you create an Azure Active Directory, Azure sets one up for you, but there are cases where you might want to set one up yourself. The reason is that you could be, say, an enterprise that already has its own Active Directory on-premise, and you've decided to move it over to Azure AD because you want a fully managed Active Directory and you want to tap into the cloud. The thing is, some domain services (features on your domain controller) just might not be available, and that's where you'd need to set up your own domain controller; that's where Azure Active Directory Domain Services comes into play, because it provides managed domain services such as domain join, group policy, LDAPS, and Kerberos (I can never say that properly) and NTLM authentication. The great thing is you get these domain services without having to deploy, manage, or patch them; they just work. So there you go.
Let's take a look at Azure AD Connect, a hybrid service used to connect your on-premise Active Directory to your Azure account. Azure AD Connect allows for seamless single sign-on from your on-premise workstations to Microsoft Azure, and it has the following features. Password hash synchronization: a sign-in method that synchronizes the hash of a user's on-premise AD password with Azure AD. Pass-through authentication: another sign-in method, which allows users to use the same password on-premise and in the cloud (that's the one I like). Federation integration: a hybrid environment using an on-premise AD FS infrastructure, including certificate renewal. Synchronization: responsible for creating users, groups, and other objects, and ensuring on-premise and cloud data match, so you have the same AD objects both on-prem and in the cloud. Health monitoring: a robust monitoring service providing a central location in the Azure portal to view this activity, called Azure AD Connect Health. The big takeaway is that Azure AD Connect is used to create a hybrid connection.
Now let's take a look at some of the AD objects, starting with users. Users represent an identity for a person or employee in your domain; a user has login credentials and can use them to log into the Azure portal. Here I am as a user, and you can see it shows how many times I've logged in and which AD groups I'm part of. You can assign roles and administrative roles to users, add users to groups, enforce authentication with MFA, track user sign-ins (as you can see on the right-hand side), track the devices users log in from and allow or deny devices, and assign Microsoft licenses. Azure AD has two kinds of users: members, users that belong to your organization, and guest users, which belong to another organization. We'll cover Azure AD roles in the roles section, because that's what you'll be applying to these users.
Groups in Azure AD let resource owners assign a set of access permissions to all members of the group, instead of having to provide the rights one by one. On the right-hand side you can see I have a bunch of groups in ExamPro. Groups can have owners, and owners have permission to add or remove members, while members have the rights to do things. For assignment, you can assign roles directly to a group and assign applications directly to a group. With request-to-join groups, the group owner can let users find their own groups to join instead of assigning them, and the owner can set the group up to automatically accept all users that join, or to require approval. This is really great when you just want people to do the work themselves, as opposed to all the manual labor of adding them to groups. Next, let's talk about how we're going to give users rights to access resources.
There are four different ways to do that. The first is direct assignment, where the resource owner directly assigns the user to the resource. Then there's group assignment, where the resource owner assigns a group to the resource, which automatically gives all group members access. Then there's rule-based assignment, where the resource owner creates a group and uses a rule to define which users are assigned to a specific resource. Finally there's external authority assignment, where access comes from an external source such as an on-premise directory or a SaaS application. I just want you to know there are four different ways to get access to resources.
Let's talk about external identities. External identities in Azure AD allow people outside your organization to access your apps and resources while letting them sign in with whatever identity they prefer, so your partners, distributors, suppliers, vendors, or other guests can bring their own identities, such as Google or Facebook. You can share apps with external users (that's the B2B side); if you develop apps intended for other Azure AD tenants, single-tenant or multi-tenant, you can do that as well; and you can develop white-label apps for consumers and customers, which would be Azure AD B2C. So there you go.
Hey, this is Andrew Brown, and welcome to the AZ-104 follow-along. The first video in our journey of learning how to be a good administrator is about Azure Active Directory tenants. Tenants are the way we group all of our users within an organization; they're the logical division of organizations within Azure. Let's make our way over to where tenants are: they're under Azure Active Directory. We'll click there, and the idea is that when you first created your Azure account, you already have a single tenant, and that is your organization. Here I can see I have one called The Hush Nook, and down below I have a license applied to it, Azure AD Premium P2. But you can create multiple tenants, and these can act as different organizations; really, a tenant is just an instance of Azure Active Directory, which is exactly what it is underneath. Let's go ahead and create a new one. We have two options here: we can create an Azure Active Directory (what's really implied is that it's for B2B; when you read the documentation they talk about B2C and B2B, where B2B means business-to-business, joining businesses and other businesses together, and B2C means business-to-consumer), and up here in the description they say B2C enables users to access applications published by your organization and share administration experiences. We'll go to the next step, and we need to name our organization; I'm going to name it Starfleet. We also need our initial domain; notice it's going to be on onmicrosoft.com, so we'll lowercase it. If it's already used it might complain, and it is already taken, so we'll try theussstarfleet... oh, it must be alphanumeric, so take the punctuation out. There we go, a "u" and three "s"s, which is not easy to read at all, but that's okay. I'm going to change my default location to Canada; I think it still deploys in the US anyway, so it's still going to say United States for where the data center is. We'll hit Review + create and give it a moment. It says I'm missing some information: the name is already used by another directory, so I still can't call it that, boy oh boy. I'll just say starfleet1984, and I think we should be able to do that, so we'll go ahead and create it. You're going to have to play around and find a name that works for you, and then we just wait for it to create the tenant; it shouldn't take too long.
After waiting a couple of minutes, on the right-hand side it says it created the tenant, so we can proceed. I'm going to go back to Azure Active Directory via the search at the top; I find it easier to always search and go back that way. We'll wait a moment for this to load. We're still in our Hush Nook Azure Active Directory, my default one, but say I want to switch tenants: I can go to Switch tenant, and here I have my ExamPro one, Hush Nook, and Starfleet. I'll switch over to Starfleet, and essentially now I have isolated users: I can have different users in these other tenants, and that's the way you isolate them out. Notice this one is Azure AD Free. The thing with Azure Active Directory is that the higher the tier, the more functionality you have, and one that is very popular, especially with enterprises, is the P2 tier (Premium P2). I just want to show you how you'd go about upgrading to it. It does cost money, but you also get a free trial; I don't believe you are billed during the trial, and at the end of it you have to make an explicit purchase. Let's see if I can do it (it might not give me a trial because I'm already running another one), but let's find out. I clicked on Licenses on the left-hand side, then All products, and then Try / Buy. On the right-hand side it says Azure AD Premium P2, and we can click Free trial; it tells you why you'd want P2, such as multi-factor authentication, policy-driven management, and end-user self-service. I'm going to go ahead and upgrade; you don't have to, you can just watch me do it here. I'll hit Activate, and now it's activated in my account. If I go back to Azure Active Directory (I'll go back this way rather than via Home; it's a lot safer that way), eventually it should update. It still says Azure AD Free, but really I've upgraded; it's just delay in the console, and after a few refreshes it will probably show up. What that's going to do is enable some additional features. They don't really matter so much for the course; maybe I could show dynamic role assignment if I can remember how that works (I think it's under Roles), but anyway, that's how you do that. I just wanted to show you how to create a tenant and then switch between them.
So we created ourselves a tenant; let's go ahead and actually create some users. Before we proceed, I just want to point out that you can tell what tenant you are in by looking in the top-right corner: here I'm in my Starfleet tenant, and I think you can actually switch between your directories from there too, which is an easier way. Generally, though, that's not how I do it; I go to the top, type in Azure Active Directory, and when I want to switch tenants I click the Switch tenant button and pick what I want. We'll make our way back to the Starfleet tenant; here you can see my information, and I've obviously activated Azure AD Premium P2. Before we go ahead and create a user, we're going to create a group, and I'll show you why: if I speed through the new-user form for a second and scroll down, there's an option to assign the user to a group (I guess I don't have to, but the problem is I don't have one). So before I create my user, let's create a new user group. Creating a new group, we have two options: Security and Microsoft 365. It explains the difference right here: 365 is really for giving access to mailboxes, calendars, files, and SharePoint, and we're not doing that stuff here, so we're sticking with regular Security groups, which are for Azure stuff. We'll name it developers, and give it the description developers too. Notice the membership type drop-down: on the free tier this is grayed out, but Dynamic User is part of P2, and it allows us to add a dynamic query. The idea is that once I start having users, I can say "if a user is from Canada, automatically add them to this group", and that's the query that gets output. But we're not going to make a dynamic group today; we'll just go back to the new group and assign members manually. All right, I'll go ahead and create that group.
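For reference, the "user is from Canada" idea mentioned above would be written as a dynamic membership rule along these lines. This is a sketch: the rule syntax is Azure AD's, but keying on user.country assumes you actually populate that property on your users.

```
(user.country -eq "Canada")
```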
and now that we have that group let's make our way over to users so we'll go back here i'm going
to make myself a new user and on the top here we'll go here and we'll name this one kevin
and we'll have that lower case and we will call them kevin uxbridge all right
and you'll notice here it will auto generate a password it's four letters and four numbers i don't find this
personally very secure but the idea here is that a user is going to reset that password right away so it's not a big
deal and that's very easy to remember and now i can go here and assign groups and if i want to assign a role i can do
that here so we'll open that up we have a bunch of different options maybe we want these two roles here
and then we have some additional information i'll go ahead and create this user and so now this user exists
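Creating a user like this can also be done programmatically. As a sketch (not a live call), this is roughly the JSON body you would POST to the Microsoft Graph `/users` endpoint — the UPN domain and password here are placeholders:

```python
import json

# Sketch of a Microsoft Graph "create user" request body (POST /users).
# The UPN domain and the password are placeholder values.
new_user = {
    "accountEnabled": True,
    "displayName": "Kevin Uxbridge",
    "mailNickname": "kevin",
    "userPrincipalName": "kevin@starfleet.onmicrosoft.com",
    "passwordProfile": {
        # matches the portal behavior: user resets the password on first sign-in
        "forceChangePasswordNextSignIn": True,
        "password": "TempP@ssw0rd!",
    },
}

body = json.dumps(new_user)
```

The `forceChangePasswordNextSignIn` flag is why the weak auto-generated portal password isn't a big deal in practice.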
what's interesting is that i can go ahead and delete users and then recover them the groups have this as well where
if you go here and i actually have a user i deleted previously so it just takes some time to show up
but if you go ahead you can hit the delete user here and then uh
they will remain in here for 30 days and then they'll automatically be deleted so if someone made a mistake
this is a great opportunity to bring them back and so i can just go check box on richaun here and we can just bring
her on back restore that user all right and again you know this is sometimes a bit delayed so we might have
to hit refresh there she is and so uh you know that is the whole group and
user stuff there okay now let's take a look at the
concept of guest users so let's make our way back to users within our tenant and there's this button here for new guest
users the idea behind guest users is that we're able to invite other users from other tenants and this
is a much simpler process than using federation federation is the concept of joining active directories together so
if there is an on-premise or other external active directory they would somehow want to connect the two together
and there's a lot of administrative burden with that but if everybody's already using azure active directory
it's as simple as adding a guest user so what we're going to do is go back to home here i'm just going to go back to
azure active directory and we're going to switch tenants i'm going to go back to my hush nook one here
and you probably won't see that that's just showing up because it wants me to use mfa
and i'm going to go here to users and we're going to go ahead and create ourselves a new user i'm going to call
this one a hush nook and we will name it simply that and let's make note of the actual uh
email there that's pushnuk onmicrosoft.com and we'll go ahead and create that
and now what i'm going to do is i'm just going to copy that email there
okay i think i got it yes i do good and we're going to make our way back to
azure active directory we're going to switch tenants back to starfleet
and we are going to go to users and we're going to add a new guest user i'm going to invite a user
and i'm just going to put in their email there and we will say hush
and we'll just put that there and here we can assign them to groups and roles it's all the same story
here i'm just going to go ahead and invite them and so now they've been sent off an
invitation notice that they show up as guests all right so there you go
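Guest invitations can likewise be driven through the Microsoft Graph invitations API. Here's a sketch of the request body you would POST to `/invitations` — the invited address and redirect URL are placeholders:

```python
import json

# Sketch of a Microsoft Graph guest invitation body (POST /invitations).
# The email address and redirect URL below are placeholder values.
invitation = {
    "invitedUserEmailAddress": "someone@example.onmicrosoft.com",
    "inviteRedirectUrl": "https://portal.azure.com",
    # send the email invite, the same thing the portal does for you
    "sendInvitationMessage": True,
}

body = json.dumps(invitation)
```

This is the B2B collaboration path the video describes: no federation setup, just an invitation that creates a guest object in your tenant.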
let's take a look at bulk operations let's say we needed to bring in a bunch of different people into our actual
account here and so we have this option of bulk create and if you look on the right hand side we can go ahead and
download a csv template that's ready to use i'm going to double click that i already have excel installed here
and off the bat what you can see is we have a bunch of options and that we can fill in
and here we have one example there and so i think that we could try to give this a go here but we're going to have
to make sure we use the same user principal name format here so i'm just going to grab this here
and let's just make a few people so we need our principal here we'll just say picard
whoops did not like that i copy that and we'll do up here we'll say data
card data card picard
data data just see what else i need we're going to need a password i'm just going to use
the same one here set that as no i don't think any of the other things
there are needed we'll go ahead and save that and i'm just going to save this
somewhere convenient okay and what i'm just going to do is pull
this up off screen here i'm just looking for where i saved it just give me a moment
and there it is i'm just going to drag it on in here i think i can drag it in like that
maybe not as easy just go back to uh it's under my downloads
here we go and we'll just hit submit we'll give that a moment shouldn't take
too long you can probably view the status here if we click that there and now we're just
under bulk operation results i'm just going to give it a refresh you can see that it's not completed yet
i can click into it and we'll just have to wait a little bit here okay
oh it looks like it just succeeded i'll give it a nice refresh it's going to tell me how did it turn out to success
zero failures total requests let's click into that so we can see what it did that looks pretty darn good to me
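If you end up generating that bulk-create CSV by script rather than by hand, it might look something like this. The header strings and the leading version row are assumptions based on the downloaded template, and the domain and password are placeholders — always start from the current template the portal gives you:

```python
import csv
import io

# Sketch: generate a CSV shaped like the Azure AD bulk-create template.
# Header names and the "version:v1.0" marker row are assumptions based on
# the downloaded template; domain and password are placeholder values.
HEADERS = [
    "Name [displayName] Required",
    "User principal name [userPrincipalName] Required",
    "Initial password [passwordProfile] Required",
    "Block sign in (Yes/No) [accountEnabled] Required",
]

def bulk_create_csv(names, domain="starfleet.onmicrosoft.com",
                    password="TempP@ssw0rd!"):
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["version:v1.0"])  # template version marker
    writer.writerow(HEADERS)
    for name in names:
        upn = f"{name.lower()}@{domain}"  # same UPN pattern for every row
        writer.writerow([name, upn, password, "No"])
    return buf.getvalue()

csv_text = bulk_create_csv(["Picard", "Data"])
```

Keeping every row on the same UPN pattern is exactly the "same user principal name format" point from the demo.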
so that is pretty good we'll make our way back to our users and you can see that they're imported
en masse so there you go let's take a look at how we can enable
multi-factor authentication for our users and so mfa stands for multi-factor authentication you're probably used to it where it's a
secondary step to confirm your identity before logging in either via a phone or a hardware device such as a yubikey
and so there's this button here this is multi-factor authentication and it could be disabled for you it very likely is if
you're on the free version of your azure ad just going back up here one level
you can tell based on what that is here so i'm using azure ad premium p2 that's definitely going to allow me to have mfa
so if you just don't want to turn on p2 or it's trying to charge you money don't worry about it just watch me do it here
you pretty much learn what you need to know so we'll just go ahead and click that button it's going to bring us to a
different screen here and here we have our users and you can see multi-factor authentication is
turned off for all of them so we can go ahead here and turn it on for a single one if i click enable
and so that enables it for kevin then we have some other additional options like manage user settings and here you can
see we have some extra options so require selected users to provide contact methods again delete all
existing app passwords generated by the selected users restore multi-factor authentication on all remembered devices
so just some additional things another thing you will want to check out here it's not very obvious but we have a
service settings button up here and this will give us more additional options to enforce for our users so
first we have app passwords so this allows users to create app passwords to sign into non-browser apps so things
that are not part of the website here and so you can white list some ip addresses for them uh so that they are
trusted from those locations then for verification options this is something that's important we have some options
like call to phone so to do the mfa it will actually call your
phone and tell you the letters and numbers that you enter in or it can send you a text message or it can
notify you through the mobile application so you can install the companion app
from the android or ios store and then there is
the verification code from the mobile app or a hardware token so a hardware token could be a yubikey right
then you can say remember multi-factor authentication on trusted devices for x amount of days that way they don't have
to enter it all the time because it might get annoying for some users honestly for me i want everyone to
enter it every single time because that's just how i am but let's make our way back here and see how we can do some
bulk assignment so what i'll do is i want to enable for a few people here so if i go up to bulk update
i can download a sample file and that's going to go ahead and download that
and you can see i actually did this previously here so i'll just copy it and what we'll do is just paste that in
there so we're just pasting in yeah that's fine we're just pasting in
their name right and the mfa status you can always get the name from here just back on the list there
and so once we have saved that file we can go ahead and upload that they're the same here so it's not a big
deal and we'll just wait a few seconds here this is pretty darn quick
usually it doesn't take minutes it takes seconds but we'll just give it a moment here
and there we go so that was a long wait i don't know why but i verified the two there and so if we go next now those
will be now enabled there's also this option to enforce so let's go there uh after multi-factor authentication is
forced users will need to create app passwords to use non-browser applications such as outlook or lync
and so that's a great option to have so i'll just go ahead and enable that as well
so now that's enforced and so that's all you really need to know about mfa
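To recap, per-user MFA in that legacy portal has exactly three states. If you were validating a bulk-update file you built yourself, a sketch could look like this — the two-column layout mirrors the sample file, but treat the exact header names as assumptions:

```python
import csv
import io

# The three per-user MFA states the legacy MFA portal exposes.
VALID_STATES = {"Disabled", "Enabled", "Enforced"}

def check_mfa_bulk_rows(csv_text):
    """Return rows of a (username, state) bulk-update sketch with bad states."""
    reader = csv.reader(io.StringIO(csv_text))
    next(reader)  # skip the header row, e.g. ["Username", "MFA Status"] (assumed)
    return [row for row in reader if row and row[1] not in VALID_STATES]

sample = "Username,MFA Status\nkevin@starfleet.onmicrosoft.com,Enabled\n"
problems = check_mfa_bulk_rows(sample)
```

Enabled means the user will be prompted to register; Enforced means registration is done and app passwords come into play for non-browser apps, as described above.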
okay so we're back in our tenant here and what i want to do now is actually set some
abilities for users to reset their password so they don't have to come and bother the administrator all the time so
under your tenant on the left hand side what you want to look for is password reset
and now we have a bunch of options here for resetting password so we're going to have to go ahead and
enable it we can enable it based on a per group level such as developers but what i'm going to
do is set it for everybody here which i think is a lot better option and then down below we have a lot of
things we can do here so looking at authentication methods we can choose the number of methods required to reset
so maybe they need to use two options in order to reset their password which is a lot more secure
and then we can checkbox on possible methods they can use then we could require a certain number of
questions required to register and to reset so three and three you can see i was setting something earlier here you can create
your own custom questions and then so we'd go ahead here and just add a new one so
you know what is the best song
we'll go ahead and hit add and the idea is that you can add them there there's also a bunch of predefined ones
so if you have a hard time thinking of any you can definitely checkbox on some here right
and so now we have a large pool of questions that can be used there we'll go ahead and save that
then under registration we can say require users to register when signing in
i think that's a great option to have there and then you can have number of days before users are asked to reconfirm
their authentication method and that's defaulted to 180 days so i would probably leave that as on
and we can notify users on password reset notify all admins when other admins reset their password very useful
this was probably turned off so i'd go ahead and turn that on there you can customize a help desk link so
the idea here is that uh when someone's having trouble uh they might say contact your administrator well how do you
contact administrator will you provide that information either a url or an email so i've just provided my email
there there's some on-premise integrations never used this stuff before because i'm never working with
on-prem but you know that stuff is pretty straightforward as well and you have
usage and insights so you can actually see when people are resetting their passwords and things like that and what
is being used so that seems pretty darn good and that's pretty much all there is to it so there you go
hey this is andrew brown from exam pro and we're looking at the azure active
directory cheat sheet this one's three pages long so get ready for it and let's jump into it so active directory just
active directory alone which is ad is microsoft's identity and access management service it helps your employees
sign in and access resources then you have azure ad and this is the same thing except it's the cloud-based version
and one term that you'll hear a lot with this is identity as a service which just means that it's like the serverless or
hosted version of this so you don't have to think about it or manage servers azure ad comes in four editions
you got the free one the office 365 apps premium 1 which is also known as p1 premium 2 which is p2 it's good to know
the differences between what features you have here so you should know all the details in between
here so i highlighted conditional access because that's an important one on the exam to know azure ad can authorize and
authenticate to multiple sources so if you're doing on-prem you're going to be using azure ad connect if it's for a web
app you're using app registration if you're using facebook or google you're using external
identities and you can also connect to office 365 or microsoft azure okay
for active directory terminologies uh when we're talking about a domain domain is an area of network organized by a
single authentication database an active directory domain is a logical grouping of ad objects on a network then you have
domain controller commonly abbreviated as dc domain controller is a server that
authenticates user identities and authorizes their access to resources you have a domain computer a computer that
is registered with a central authentication database a domain computer would be an ad object then you
have ad objects this is the base element of active directory things like users groups printers etc you have a gpo this
is a virtual collection of policy settings then you have organizational units this
is a subdivision of your ad which you can place users groups computers etc in you have directory service
such as active directory domain services ad ds which provides a method of storing directory data and making the data
available to network users a directory service can run on a domain controller that's only page one we're on to page
two so a tenant represents an organization a tenant is a dedicated azure ad service
instance a tenant is automatically created when you first sign up either with microsoft azure
intune or 365. each ad tenant is distinct and separate from other tenants when you perform a lift and shift
of ad to azure not all the features are supported in that case you're going to be using ad ds so ad ds provides managed
domain services such as domain joins group policies and lightweight directory access protocol ldap
azure ad connect has the following features it has password hash
synchronization pass through authentication federation integration synchronization health monitoring you've
got the concept of users so that's an ad object remember so an identity of a person or employee in your domain a user
has login credentials and can use them to log into the azure portal azure ad has two kinds of users we got a user
that belongs to your organization and guest users a user that belongs to a different organization groups let
resource owners assign a set of access permissions to all the members of the group instead of having to provide the
rights one by one and the group contains owners and it contains members so we got
assignments so you can assign roles directly to a group and you can assign applications directly to a group we're
on to the last slide here in the cheat sheet told you this one was long request to join groups
the group owner can let users find their own group to join instead of assigning them the owner can also
set the group to automatically accept all users who join or to require approval there are four ways to assign
resource access rights to your users direct assignment group assignment rule based assignment external authority
assignment know all four of these please and so yeah there we go we're at the end here and that's all there is
so let's take a look at the type of azure roles and i was honestly first confused with azure roles because i'm
from aws and we only have one kind of role but azure happens to have three kinds and there's definitely good
reason for it so let's break that down and the first one is classic subscription administrative roles and
this is the original role system now it's not something you're going to be really using but you should know about
it because it still does exist then you have azure roles this is the authorization system known
as role-based access controls or rbac and it's built on top of azure's resource manager and then thirdly we
have azure active directory roles also known as azure ad roles and these azure ad roles are used to manage azure ad
resources in a directory so let's jump into it
so let's first talk about access controls or iam so iam stands for identity and access management it allows you
to create and assign roles to users and so this is the general interface for iam in azure
and so we're going to have azure roles which is part of that uh role-based access system and so roles
restrict access to resource actions known as operations and there's two types of roles for the azure role based
access system you have built-in roles so these are managed by microsoft and they are read-only and they're
pre-created roles for you to use and then we have custom roles and these are a role created by you with custom logic
and so the place you're going to find it there is you're going to see it says roles right there and that's
what we're looking at down below is a bunch of roles so you see owner contributor reader those are predefined
roles for you and so then you have role assignment and this is when you apply a role to either
a service principal a group or a user and so that's going to be under the tab there such as you applying a role to
somebody and then you have deny assignment so these block users from performing specific actions even if a
role assignment grants them access and the only way to apply a deny assignment is through azure blueprint so this is
just something where you're just setting up like guard rails to make sure that certain things are never used regardless
of what role is applied so now let's take a look at classic
administrators and this is the original role system and you honestly want to be using the new rbac system whenever
possible but the idea here is that it's in the same place that we just saw the access
control and there's a tab called classic administrators and in there you can set up
administrators here and so you have three types of roles you have the account administrator this is the
billing owner of the subscription and has no access to the azure portal you have the service administrator so this
is the same access of a user assigned the owner role at the subscription scope and they have full access to azure
portal and then you have the co-administrator so this is the same access as a user who is assigned the
owner role at the subscription scope so it's those three very simple roles and we actually have one because when
baco who's the other andrew set up our azure account i think it just sets you up one so even though you're not really
going to be using it i do believe that it still sets you up one when you make your account
so that's just what it is but there you go
so let's take a closer look here at azure role-based access controls because this is something we're definitely going
to be using a lot and so role based access control helps you manage who has access to azure resources what they can
do with those resources and what areas they have access to and the idea is that you have a user and you want to
assign them a role so you can use a role assignment and a role assignment's made up of three elements you have the
security principal the role definition and the scope and we're going to look at those three things in a little bit more
detail here in a second and there are four fundamental azure roles which we are going to learn and then azure rbac
also includes over 70 built-in roles which we definitely do not need to go into great detail
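Before breaking those three elements down, they can be pictured as a simple record. The subscription and group names below are made up, but the scope string follows the real `/subscriptions/.../resourceGroups/...` resource ID convention:

```python
from dataclasses import dataclass

# Sketch: the three elements that make up an Azure role assignment.
# The principal name and subscription ID are placeholders; the scope
# string format is the standard Azure resource ID convention.
@dataclass
class RoleAssignment:
    security_principal: str  # user, group, service principal, or managed identity
    role_definition: str     # e.g. a built-in role like "Reader" or "Contributor"
    scope: str               # management group, subscription, resource group, or resource

ra = RoleAssignment(
    security_principal="developers",
    role_definition="Contributor",
    scope="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg",
)
```

The deeper the scope path, the narrower the grant: a role assigned at a resource group applies only inside it, while the same role at the subscription applies everywhere below.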
so let's take a look at one of those three elements first which is the security principal and this
represents the identity requesting access to an azure resource and when we say identity that's just like a loose
term for something and that something could be a user in your azure active directory a group which defines a set of
users in your azure active directory a service principal so a security identity used by applications or services to
access specified azure resources or a managed identity an identity in your
azure active directory that is automatically managed by azure so a service principal is basically an azure
service and a managed identity is something in your azure active directory then we'll move on to scope and a scope
is just the set of resources that the role assignment applies
to and so scope controls access at the management group subscription or resource group level so what does that mean and
we have another slide on this i can't remember what section it's in but you have this breakdown of scope where you
have management groups subscriptions resource groups or resources so when you're saying i'm setting a scope you're
saying what is the scope is it on a management group is it on a particular resource is it a resource group and
that's what we're trying to say there and then last the last element there is a role definition and this is a
collection of permissions so a role definition lists operations that can be performed such as read write and delete
and roles can be high level like owner or specific like a virtual machine reader and so azure has built-in
roles and we said there were four fundamental built-in roles and here they are it's owner contributor reader and
user access administrator so you want to know those four and then across the board you have those three
operations read grant and then create update delete so you can see the owner can do everything the
contributor can both read and create stuff they just can't grant access to other people the
reader just has read only access and then a user access administrator is granting other users
privileges but themselves are not creating anything all right
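Those four fundamental roles against the three operation groups just described can be summarized as a small table — worth memorizing for the exam (this mapping reflects the slide's description, not an exhaustive permissions list):

```python
# The four fundamental built-in roles vs the three operation groups
# described above: read, grant access to others, and create/update/delete.
FUNDAMENTAL_ROLES = {
    "Owner":                     {"read": True,  "grant": True,  "write": True},
    "Contributor":               {"read": True,  "grant": False, "write": True},
    "Reader":                    {"read": True,  "grant": False, "write": False},
    "User Access Administrator": {"read": True,  "grant": True,  "write": False},
}

def can(role, operation):
    """Look up whether a fundamental role allows an operation group."""
    return FUNDAMENTAL_ROLES[role][operation]
```

So a contributor can read and change resources but can't hand out access, and a user access administrator is the mirror image: grants access without creating anything.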
so now let's take a look at the third type of roles and that's azure ad roles and so azure ad roles are used to manage
azure ad resources in the directory such as creating or editing users assigning administrative
roles to others resetting user passwords managing user licenses and managing domains and so on the right
hand side here what we can do is go into azure active directory in azure and under roles
administrators we can see there is a bunch of roles predefined for us like application administrator application
developer etc there's a lot in there and a few important built-in roles that you should know is the global
administrator so this gives you full access to everything the user administrator full access to create and
manage users the billing administrator makes purchases manage subscriptions and support tickets
and i want you to know that you can create your own custom roles but for whatever reason you have to
purchase this which to me is a bit unusual because in the aws world this is something you don't have to pay for
but i guess azure active directory has been around for a long time and so they have this whole big tier
in there and so if you want to be able to make custom roles you have to upgrade your azure ad to premium so premium
could be p1 or p2 and i'm assuming the higher the number the more controls okay
so let's take a look at the anatomy of an azure role and as you saw in the previous slide it does cost
money to be able to have the privilege to create
custom roles you'd need azure ad premium p1 or p2 but it's still a good practice to go take a
look at what the contents of an actual role is because you can actually look at the managed roles open them up and
see what they do and you should really look at them and not just take them
based on the face value of the name of the role so i want you to know that
azure role documents have two different syntaxes whether using powershell or json and so for the example
here on the right hand side is gonna be using powershell and it's very simple and it's just the property name is going
to change so see where it says name if you were using json it would be rolename as i have in parentheses okay so
if you look one up and it's using json and the names are different you're going to have to do that translation all right
so let's just quickly go through the properties here so the first thing is the name that's going to describe what
the role is then you have the id this is not something that you're creating it's going to be auto-generated for you then
you have is custom whether it's going to be custom or not the description that's self-explanatory and then the actions
this is what you care about because the actions tell you exactly what you're allowed to perform
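As a sketch, here is roughly what those properties look like in the JSON-style syntax, plus a helper showing how a wildcard action would match — the role contents are illustrative, not copied from a real built-in role:

```python
import fnmatch
import json

# Illustrative custom role document (JSON-style property names).
# The role name, actions, and subscription ID are made-up examples.
role = json.loads("""
{
  "Name": "Virtual Machine Operator",
  "Id": null,
  "IsCustom": true,
  "Description": "Can monitor and restart virtual machines.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/read",
    "Microsoft.Compute/virtualMachines/restart/action",
    "Microsoft.Insights/*"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"]
}
""")

def allows(role, operation):
    """True if any Actions entry (wildcards included) matches the operation."""
    return any(fnmatch.fnmatch(operation, pattern) for pattern in role["Actions"])
```

The `Microsoft.Insights/*` entry is the asterisk idea discussed below: one wildcard entry stands in for every operation under that provider.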
uh so that's a big list there it's pretty
self-explanatory but then you have not actions and this is basically you explicitly saying you're not allowed to
do these things it's just a guardrail to make things safe do you really need not actions i don't know that's just how
they designed it then you have uh data actions this is an array of strings that specifies data operations that are
allowed to be performed within the data within that object and then you have not data actions so things you're not
allowed to do with data and then assignable scopes and this is an array of strings that specify the scope for
the custom role and you can only define one management group for assignable scopes for a custom
role i do want to just point out one other thing with these
roles and it really just has to do with these asterisks if you're not familiar with them that usually
indicates a wild card permission it's like saying match everything and so
when you're doing that it's going to match things like you can use it in actions not actions
data actions and not data actions and so again it just matches everything in that sub thing because you might have
a bunch of options you can put there and so for example let's see where
it says cost management and it has action read write delete run action all that stuff in red would
be matched with that asterisk and i know it doesn't match the examples on the right hand side but you just gotta
imagine that there's other things there okay now let's just do a quick comparison
between azure policies and azure roles uh just to make sure that we're very clear what the difference of these two
things are so azure policies these are used to ensure compliance of resources and azure roles these are used to
control access to azure resources so on the policy side these are going to evaluate the state by examining
properties on the resources that are represented in resource manager and properties of some
resource providers they do not restrict actions also called operations azure policy ensures that resource state is compliant
to your business rules without concern for who made change or who has permission to make the change even if an
individual has access to perform an action if the result is a non-compliant resource azure policy still blocks the
create or update actions there okay on the roll side it's very simple it
focuses on managing user actions at different scopes and it does restrictions
it does apply restrictions on azure resources okay so azure roles
control what you have access to and azure policies ensure compliance
all right just another comparison here i want to compare azure ad roles to the azure roles the rbac versions so ad
roles they are used to control access to ad resources and azure roles control access to azure
resources and here's a good representation here where you have ad roles on the left-hand side and azure
roles on the right-hand side and the idea on the left-hand side that actually represents like office 365 but
you can see azure ad has some coverage both inside and outside of
azure so an ad resource could be something like users groups
billing licensing application registration or etc
and then on the right-hand side for azure resources this could be virtual machines databases cloud storage cloud
networking and etc by default azure roles and azure ad roles do not span azure and azure ad and by
default the global administrator doesn't have access to azure resources so
that's just good to know and global administrators can gain access to azure resources if granted the user access
administrator role which is an azure role so there you go
hey this is andrew brown from exam pro and we are looking at the azure roles cheat sheet and when we say azure roles
cheat sheet we're actually talking about all the roles not just the azure roles okay because there's three types you've
got the classic subscription administrator roles you have azure roles which is also known as role based access
controls that's built on top of arm and then you have azure active directory roles okay
so yeah you need to know the difference between the three when we're talking about identity access management iam
that allows you to create and assign azure roles to users roles restrict access to resource actions also known as
operations there's two types of roles you got built-in roles managed by microsoft these are
read-only pre-created roles for you to use and then you have custom roles a role created by you with your own custom
logic a role assignment is when you apply a role to a user and a role assignment is composed of a security
principal role definition and scope azure's four built-in roles are owner contributor reader and user access
administrator i don't have the table here but you should know the difference between them but usually they're pretty
self-explanatory based on their name for classic administrators we have three types of roles we've got account
administrator service administrator and co-administrator for important azure ad roles you need to
remember you have global administrator user administrator billing administrator and on the exam you might even see more
kinds there so i would encourage you to go look up all the different kinds or i might end up making a video going
in more in depth for some of them in this course you can create custom azure ad roles but you're going to need that
p1 or p2 so there you go that's azure roles
hey this is andrew brown from exam pro and we are taking a look at azure key vault so this helps you safeguard
cryptographic keys and other secrets used by cloud apps and services azure key vault focuses on three things so the
first is certificate management so easily provision manage and deploy public and private ssl certificates for
use with azure and internal connected resources key management so create and control the encryption keys used to
encrypt your data secrets management store and tightly control access to tokens
passwords certificates api keys and other secrets and certificates contain key pairs a key and a secret not to be
confused with key management and secrets management i do have to list that out here because
you'll notice here and i'm getting my pen tool out we have certificates keys and secrets but within a certificate it
can contain keys and secrets but just understand that there are three isolate offerings that um azure key vault has if
this was another provider such as aws these would all be isolate services but uh you know a common pattern with azure
is they like to group a bunch of uh functionality under a single uh service and so you know these things which which
should be three services or actually under all azure key vault but there you go
Let's talk about hardware security modules and FIPS, because this is important for Azure Key Vault. A hardware security module, also known as an HSM, is a piece of hardware designed to store encryption keys. Here's an example of one from Gemalto (I'm not sure how to pronounce it). The idea is that these are pieces of hardware that are extremely expensive but extremely specialized for holding keys in memory, not even writing them to disk, so that there's no chance of somebody stealing those keys if they were to take the device. Then there's the standard FIPS, the Federal Information Processing Standard. This is a U.S. and Canadian government standard that specifies the security requirements for cryptographic modules that protect sensitive information. HSMs can be multi-tenant, and if they are, that's where we'll be using FIPS 140-2 Level 2 compliance: you have multiple customers virtually isolated on one HSM device, which is a way of sharing the cost of one of these very expensive machines. Then you have single-tenant HSMs: this is where you go if you need to meet Level 3 compliance, where a single customer is dedicated on an HSM. So there you go.
Let's take a look at what a vault is in Azure Key Vault. A vault stores secrets and keys that can be protected either by software or by FIPS 140-2 Level 2 validated HSMs, so actual hardware security modules. Azure Key Vault provides two types of containers: vaults, which support both software-protected and HSM-backed keys, and HSM pools, which only support HSM-backed keys. To activate your HSM, you will need to provide a minimum of three RSA key pairs, up to a maximum of ten, and specify the minimum number of keys required to decrypt the security domain, called a quorum. One key thing to know is that you do not choose the container on creation; you choose between Standard and Premium, and when you choose Premium and create RSA key pairs, you'll be able to begin creating HSM pools. When I was going through the follow-alongs, I was like, hey, where are these HSM pools? And that's where I found out about these requirements. We're not going to do that, because it's just really expensive and it's out of the scope of the exam, but if you are an organization thinking about using HSMs, those are some of the requirements. Okay.
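The HSM activation rule just described (three to ten RSA key pairs, plus a quorum that can't exceed the number of keys) can be sketched as a quick check. This is purely an illustration of the constraint, not an Azure API, and it assumes a minimum quorum of two:

```python
def valid_hsm_activation(key_pairs: int, quorum: int) -> bool:
    """Check the HSM security-domain rule from the lecture:
    3..10 RSA key pairs, and a quorum (minimum keys needed to
    decrypt the security domain) no larger than the key count."""
    return 3 <= key_pairs <= 10 and 2 <= quorum <= key_pairs

print(valid_hsm_activation(3, 2))   # True  - minimum allowed setup
print(valid_hsm_activation(11, 5))  # False - too many key pairs
```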
Now let's quickly talk about the Key Vault API. It's a REST API for programmatically managing Key Vault resources, allowing you to perform operations such as creating a key or secret, importing a key or secret, revoking a key or secret, deleting a key or secret, authorizing users or apps to access keys or secrets, and monitoring and managing key usage. Here's a diagram, and the idea is that via the API (you could use the CLI to access the API), you'd go into your vault, and if you had a certificate there, that certificate could contain metadata, a key, and a secret, and you could operate on that kind of stuff. The Key Vault REST API supports three different types of authentication: managed identity, where the identity is managed by Azure AD (this is the recommended practice because it introduces the least amount of risk); service principal and certificate, where you use a certificate; and service principal and secret, where you have a user and a secret key. So there you go.
So when setting up our vault, we have some very interesting recovery options, and I just want to give a little bit more emphasis to these. Here they are: soft delete, days to retain deleted vaults, and purge protection. Soft delete allows you to recover or permanently delete a key vault and its secrets for the duration of the retention period. It's enabled by default on creation; there is a way to not have soft delete, and I think we discovered that during the follow-along, I just can't remember at this moment, not that it's super important. Then there's the mandatory retention period, which prevents the permanent deletion of key vaults or secrets prior to the retention period elapsing, so they're kept around just in case. Then you have purge protection: when enabled, it prevents secrets from being purged by users or by Microsoft. So yeah, there you go.
Let's take a look at pricing for Azure Key Vault. It comes in two pricing tiers, Standard and Premium. The key difference is that Premium allows for both software-protected and HSM-protected keys, where Standard does not have HSM-protected keys. Let's look at the pricing in particular. For RSA 2048-bit keys (that's the size of the key), you're looking at a cost of $0.03 per 10,000 transactions, and it's the same for HSM-protected keys, with the exception that you also pay $1.00 per key per month. For advanced key types, so RSA 3072 or 4096 (the longer ones) or ECC keys, we're looking at a greater cost: $0.15 per 10,000 transactions, and you're still going to pay per transaction, but for HSM-protected keys you're going to see a tiered scheme: for the first 250 keys it's $5.00 per key per month, for the next range from 251 to 1,500 it's $2.50, and it gets lower and lower with volume; the more keys, the more savings. Some other things here: secret operations are priced the same for both software and HSM, so there's no difference in cost there, but there is a cost for secret operations of $0.03 per 10,000 transactions. Certificate renewal is $3.00 per request, managed Azure Storage account key rotation is $1.00 per renewal, and managed HSM pools are billed at an hourly rate. This might vary in price based on your region, but I just want to show you that there is an additional cost for keys when they are HSM-protected. Okay.
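To make the per-transaction math concrete, here's a toy cost calculator using the figures quoted above ($0.03 per 10,000 operations for standard RSA-2048 keys, plus $1 per key per month when HSM-protected). The numbers come from the lecture and may differ by region or change over time:

```python
def monthly_key_cost_cents(transactions: int, hsm_keys: int = 0) -> int:
    """Toy estimate in whole cents: 3 cents per 10,000 transactions,
    plus 100 cents per HSM-protected RSA-2048 key per month."""
    transaction_cost = (transactions // 10_000) * 3
    hsm_cost = hsm_keys * 100
    return transaction_cost + hsm_cost

# 50,000 transactions in a month against 2 HSM-protected keys:
print(monthly_key_cost_cents(50_000, hsm_keys=2))  # 215 cents = $2.15
```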
Now we're taking a look at keys for Azure Key Vault, one of the three core offerings (keys, secrets, and certificates). Keys are used to encrypt things with Azure services; for example, if you need to encrypt Azure disks, or apply them in a variety of encryption-at-rest scenarios, they're useful because they easily integrate with those Azure services. When creating a key, there are three options: Generate, where Azure will generate the key; Import, to import an existing RSA key; and Restore Backup, to restore a key from a backup. For keys generated by Azure, you can use either RSA or EC (sometimes we see people say ECC, but in the UI they call it EC). RSA is an algorithm, and the name is actually a combination of the names of the three cryptologists who made it. You'll see there are a few different lengths, 2048, 3072, and 4096; that's the number of bits, that's how long the key is. Then you have elliptic curve cryptography, with curves named P-256, P-384, P-521, and P-256K; I'm going to assume that also has to do with length, and I don't know exactly how it translates to bit size, but I'm sure it's similar. For keys generated by Azure, you can set an activation and expiration date, probably a good practice because it forces you to rotate your keys after a period of time. You can create new versions of keys, which is a cool benefit, and you can download backups of keys; backups can only be restored within the same Azure subscription and Azure Key Vault. I think we do that in the follow-along. For Premium vaults, you do have additional options for HSM, and it's not that fancy: it's just RSA and EC repeated with an -HSM suffix to specify that the key lives in the hardware security module.
You can generate or import keys for the HSM. There are also Microsoft-managed keys, also known as MMK: these are keys managed by Microsoft that do not appear in your vault, and in most cases they're used by default by many Azure services. So even if you don't think you're using keys, you actually are. If you're using something like AWS, you actually do see those in the key management service, but Azure decides not to show them and just has them there by default for you. Customer-managed keys, CMK, are keys you create in Azure Key Vault; you select a key from a vault for various services. Sometimes "customer-managed" means that the customer has imported the cryptographic material, but when we're talking about Azure, when you generate or import a key, they consider that a CMK. I'm pointing that out because I use other clouds like AWS and GCP, and for AWS, CMK indicates imported cryptographic material, which is not exactly how Azure describes it here. In order to use a key, an Azure service needs an identity within Azure AD, for permission to access the key from the vault. As for infrastructure encryption, this is sometimes an option: by default, Azure encrypts storage account data at rest, and infrastructure encryption adds a second layer of encryption to your storage account data. We'll talk about double encryption next, because I was like, what's double encryption? And so then I had to make some slides here for us. Okay.
All right, let's talk about double encryption. For storage accounts, there's an option called infrastructure encryption: if we checkbox that on, Azure encrypts storage account data at rest, and infrastructure encryption adds a second layer of encryption to your storage account data. We just said that in the last lecture, but I'm trying to cement that this is for storage accounts. For Azure disks, it's called double encryption, and double encryption is where two or more independent layers of encryption are enabled to protect against compromise of any one layer. It's the same thing, but for one they call it infrastructure encryption and for the other they call it double encryption. Confusing, I know, but it's the same thing. Using two layers of encryption mitigates threats that come with encrypting data. I feel like we're repeating ourselves a hundred times here, but Microsoft has a two-layered approach for data at rest and in transit. For data at rest, you have disk encryption using customer-managed keys, and then infrastructure encryption using platform-managed keys. For data in transit, transit encryption uses TLS 1.2, the latest standard (over the years it went back and forth between SSL and TLS; it's back in TLS's court, with 1.2 being the most up to date when we made these videos), with an additional layer of encryption provided at the infrastructure layer. So should you enable double encryption? Yes, always; every time you have the opportunity, utilize it. Okay.
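To illustrate the idea of two independent layers, here's a purely toy sketch using XOR stream "ciphers" (nothing like the real AES-based layers Azure uses): data wrapped in two layers with independent keys stays protected even if one key is compromised, because recovering the plaintext requires both.

```python
from itertools import cycle

def xor_layer(data: bytes, key: bytes) -> bytes:
    """Toy 'encryption' layer: XOR with a repeating key.
    XOR is its own inverse, so the same call also decrypts."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"storage account data"
layer1_key = b"customer-managed-key"   # illustrative key material
layer2_key = b"platform-managed-key"   # illustrative key material

# encrypt twice: the disk-level layer, then the infrastructure layer
ciphertext = xor_layer(xor_layer(plaintext, layer1_key), layer2_key)

# recovering the data requires BOTH keys
recovered = xor_layer(xor_layer(ciphertext, layer2_key), layer1_key)
print(recovered == plaintext)  # True
```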
All right, let's take a look at Azure Key Vault secrets, one of the three offerings that Azure Key Vault has (secrets, keys, and certificates). Azure Key Vault provides secure storage of generic secrets, such as passwords and database connection strings. Here's an example of the form: we have the name of the secret and the value, so it's pretty clear how that works. The Key Vault APIs accept and return secret values as strings. Internally, Key Vault stores and manages secrets as sequences of octets (8-bit bytes), with a maximum size of 25k bytes each (yes, 25k, not 256). The Key Vault service doesn't provide semantics for secrets; it accepts the data, encrypts it, stores it, and returns a secret identifier, an ID. For highly sensitive data, clients should consider an additional layer of protection, for example encrypting the data using a separate protection key prior to storage in Key Vault; I guess you could encrypt the data and then upload the encrypted material, so I suppose that might be what they're suggesting. Key Vault also supports a content type field for secrets: clients may specify the content type of a secret to assist in interpreting the secret data when it's retrieved. The maximum length of this field is 255 characters. All secrets in your key vault are stored encrypted; Key Vault encrypts secrets at rest with a hierarchy of encryption keys, and all keys in that hierarchy are protected by modules that are FIPS 140-2 compliant. The encryption leaf key of the key hierarchy is unique to each key vault; the encryption root key of the hierarchy is unique to the security world. Protection levels vary between regions: China uses FIPS 140-2 Level 1, and all other regions use Level 2 or higher. (Apologies for the spelling mistake on the slide there.)
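The size limits above (secret values up to 25k bytes, content type up to 255 characters) can be sketched as a client-side pre-check. The limit values come from the lecture, and this is not an Azure SDK call:

```python
MAX_SECRET_BYTES = 25 * 1024   # 25k bytes per secret, per the lecture
MAX_CONTENT_TYPE_CHARS = 255   # content type field length limit

def validate_secret(value: str, content_type: str = "") -> bool:
    """Return True if a secret fits the documented Key Vault limits.
    Secrets are stored as octets, so measure the UTF-8 byte length."""
    return (len(value.encode("utf-8")) <= MAX_SECRET_BYTES
            and len(content_type) <= MAX_CONTENT_TYPE_CHARS)

print(validate_secret("s3cr3t-connection-string", "text/plain"))  # True
print(validate_secret("x" * 30_000))                              # False
```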
We can apply secret attributes alongside our secrets. We have exp, the expiration time; nbf, "not before", whose default value is now, the time before which the secret data should not be retrieved; and enabled, whether the secret data can be retrieved at all. We saw some of these options in the actual form itself. There are also other read-only attributes, such as created and updated (I think the slide was supposed to say "updated" there). In order to access secrets within your application code, you would use the Azure SDK. As an example, here we're importing the Key Vault secrets library; this looks like C# code, and yes, it says .NET over here. We import the Azure identity and Key Vault secrets namespaces, we define our retry options (delay, max delay, retries; I don't think that matters too much), but the idea is that we provide a URI to the actual vault, then we create our credential, we create our client, and this example just gets a secret, so we're not using this to insert a secret, just to grab it programmatically. You can also use the CLI: we have `az keyvault secret show`, and I'm pretty sure that's what we use during the follow-along. But yeah, there you go, that's secrets.
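The exp/nbf/enabled attributes described above amount to a simple retrieval gate, which we can sketch like this (an illustration of the semantics, not Key Vault's actual implementation):

```python
from datetime import datetime, timezone
from typing import Optional

def secret_is_retrievable(enabled: bool,
                          nbf: Optional[datetime] = None,
                          exp: Optional[datetime] = None,
                          now: Optional[datetime] = None) -> bool:
    """Mirror the attribute semantics: a secret can be read only if
    it is enabled, on or after nbf ('not before'), and before exp."""
    now = now or datetime.now(timezone.utc)
    if not enabled:
        return False
    if nbf is not None and now < nbf:
        return False
    if exp is not None and now >= exp:
        return False
    return True

t = datetime(2024, 6, 1, tzinfo=timezone.utc)
jan = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(secret_is_retrievable(True, exp=jan, now=t))  # False - already expired
print(secret_is_retrievable(True, nbf=jan, now=t))  # True  - past 'not before'
```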
All right, we're taking a look at X.509 certificates, but before we can talk about those, we need to talk about what public key infrastructure, also known as PKI, is. PKI is a set of roles, policies, hardware, software, and procedures needed to create, manage, distribute, use, store, and revoke digital certificates and manage public key encryption. If there's a cloud service that manages certificates, it's a PKI. So what is an X.509 certificate? It is a standard defined by the International Telecommunication Union (ITU) for public key certificates. X.509 certificates are used in many internet protocols: extremely commonly with SSL/TLS for HTTPS, as well as for signed and encrypted email, code signing, and document signing. A certificate contains an identity (a hostname, organization, or individual) and a public key (RSA, DSA, ECDSA, etc.). Then there's the idea of a certificate authority: this is an entity that issues digital certificates. A CA, or certificate authority, acts as a trusted third party, trusted both by the subject (the owner) of the certificate and by the party relying upon the certificate. So there you go.
All right, let's talk about the chain of trust for X.509 certificates. A certificate authority (CA) can issue multiple certificates in the form of a tree structure, also known as a chain of trust. Here is an example for the exampro.co website: this certificate was issued by Amazon, so Amazon is the root authority here, and then they have their intermediate certificate, which issues us the certificate that is used for our HTTPS/TLS communication, to keep encryption in transit to the website. Let's talk about what a root certificate, or root certificate authority, is. The root CA is a self-signed certificate, and its private key is used to sign other certificates. Self-signed just means that they've generated a key and signed the certificate on their own machine; they're not reaching out to anybody else to say that the certificate is valid. It's important that the private keys of the roots are protected; these are the most important keys, and to protect them we have the intermediate certificate authority, which we also like to call an ICA. Intermediate certificates are signed by the root private key and act as entities that can issue certificates; they protect the root certificate, because the root does not have to sign every issued certificate. Then you have your end-entity certificate: these are the ones we're going to be using, a certificate issued by the ICA and used by the end entity, the entity in the case of an SSL certificate being a website. With SSL/TLS, it's just common to say SSL, even though TLS 1.2 was the common standard when we were making these videos. There you go.
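The chain-of-trust walk described above (end-entity, then intermediate, then self-signed root) can be sketched as a toy verifier that only checks the issuer/subject linkage, ignoring the actual cryptographic signature checks a real TLS stack performs:

```python
def chain_is_linked(chain: list[dict]) -> bool:
    """Toy check: each certificate's issuer must match the next
    certificate's subject, and the last (root) must be self-signed.
    Real verification would also check signatures, validity dates,
    and revocation status."""
    for cert, parent in zip(chain, chain[1:]):
        if cert["issuer"] != parent["subject"]:
            return False
    root = chain[-1]
    return root["issuer"] == root["subject"]  # self-signed root

# illustrative chain modeled on the exampro.co example
chain = [
    {"subject": "exampro.co",     "issuer": "Amazon ICA"},
    {"subject": "Amazon ICA",     "issuer": "Amazon Root CA"},
    {"subject": "Amazon Root CA", "issuer": "Amazon Root CA"},
]
print(chain_is_linked(chain))  # True
```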
All right, we're taking a look at the certificate format. Here is the certificate information for the exampro.co certificate that we use for HTTPS, for the data in transit, to make sure that our information is secure. If you open it up, you can see there's a bunch of metadata, so let's talk about what's in here. We have the version number, indicating which revision of the X.509 standard it follows; notice here it says 3, so we're using version 3. We have a serial number, a unique serial number assigned to the certificate by the certificate authority (it's this big long thing here). We have the certificate signature algorithm ID used to sign the certificate; it could be RSA or DSA, and in this case it's SHA-256 with RSA. The issuer is the name of the certificate authority that issued the certificate; this one says organization Amazon and things like that, so it doesn't say my name (that might be down below here somewhere; yeah, I don't see my name in here). The validity period is how long this certificate is going to be valid for, the start and end date. The subject is who the certificate is issued to; that's what I was actually looking for: the issuer is Amazon, and the subject is where we'd want to see ExamPro, but it gets cut off here, so we just can't see it. The subject public key is the public key that is meant to be authenticated by the certificate; this field also names the algorithm used for the public key generation. Then we have other fields, like the issuer unique identifier, the subject unique identifier, and extensions, which allow you to associate private information with a certificate. But the clear thing to understand is that all this metadata is publicly readable; anyone can generally view it. The purpose of that is so that we have the means to validate the authenticity of these certificates.
So let's talk about some things that get packaged along with a certificate so that there are ways of validating it. We have this metadata, and this metadata gets run through a hash. A hash is a means of taking a bunch of information and turning it into something that looks like a bunch of random characters, but it's actually a fingerprint: if you take the same information, the hash will always produce the exact same output, so it's a fingerprint identifying the information. That hash, or fingerprint, then gets signed with the private key generated by the certificate authority, and that guarantees the authenticity of the fingerprint. The idea is that when you sign that hash, you're creating a signature. That's the authenticity: when it's signed, you know it's valid. So the certificate metadata, the signature, and the public key provided by the end user are packaged together, and with all that information, we have enough to validate the authenticity of the certificate. Okay.
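The fingerprint property described above is easy to see with a standard hash function: identical input always yields an identical digest, and even a tiny change produces a completely different one. (The metadata string here is just a made-up example.)

```python
import hashlib

metadata = b"subject=exampro.co;issuer=Amazon;valid=2024-01-01/2025-01-01"

# hashing the same metadata twice always gives the same fingerprint
fp1 = hashlib.sha256(metadata).hexdigest()
fp2 = hashlib.sha256(metadata).hexdigest()
print(fp1 == fp2)  # True

# even a one-byte change produces an entirely different fingerprint
tampered = hashlib.sha256(metadata + b"!").hexdigest()
print(fp1 == tampered)  # False
```

A CA signs this fingerprint (not the raw metadata) with its private key, which is what turns the fingerprint into a verifiable signature.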
All right, let's take a look at the extension names for digital certificates, because there's .crt, .cer, .pem, .der; there are so many varieties, and it's very confusing. Let's see if we can unpack some of the complicated history here and make it a little bit more clear. First is PEM, Privacy Enhanced Mail. The idea of this format is that it's Base64 ASCII: you can open the files up and they're very easy to read, very easy to edit, and it's the most common format for X.509 certificates, certificate signing requests, and cryptographic keys. The PEM format normally has the following extensions: .crt, .pem, .cer, and .key (.key specifically for private keys), but there's no hard rule about it; just because something is PEM doesn't mean it's going to end with .pem. Then you have DER, Distinguished Encoding Rules. This is a binary encoding: if you open up the file, it's a bunch of binary data, and you won't be able to make sense of it, edit it, or do anything with it. It's used for both X.509 certificates and private keys, and DER files normally have the .der or .cer extensions. Then there are certificate files, where we have .cer and .crt; these are Base64 ASCII, so I think they're basically PEM, and .cer and .crt are interchangeable extensions: they both basically stand for "certificate file", so it's like, hey, .cer is a short form for certificate file, and .crt is a short form for certificate file. That's where they get confusing; either .cer or .crt, these are generic file extensions. Then you have Personal Information Exchange, PFX. This is a Microsoft certificate format, and PKCS#12 is the successor to PFX, so PKCS#12 will use either the .p12 or .pfx extension. There is a lot of confusion about the variety of formats, and it's just because there was a lack of standardization early on; everybody was trying to create their own formats or their own keys, and they weren't being consistent, and that's why we end up with all these weird naming conventions. Half the time you cannot tell what something is: you have to open the contents, and you have to know what you're using it for. Just be aware that it is confusing, even for me; I've been doing this for what, 20 years, and I still can't always tell what's what. That's just how it is. Okay.
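Since the extension alone can't tell you the encoding, a practical trick is to sniff the contents: PEM files are ASCII and start with a `-----BEGIN ...-----` header, while DER files are raw binary. A minimal sketch:

```python
def sniff_cert_encoding(data: bytes) -> str:
    """Guess whether certificate bytes are PEM (Base64 ASCII with a
    -----BEGIN header) or DER (raw binary ASN.1)."""
    if data.lstrip().startswith(b"-----BEGIN"):
        return "PEM"
    return "DER"

pem_sample = b"-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
der_sample = b"\x30\x82\x01\x0a"  # DER typically starts with an ASN.1 SEQUENCE tag

print(sniff_cert_encoding(pem_sample))  # PEM
print(sniff_cert_encoding(der_sample))  # DER
```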
Hey, this is Andrew Brown from ExamPro, and we are taking a look at certificate signing requests, also known as CSRs. A CSR is a message sent from an applicant to a registration authority (a certificate authority) of a public key infrastructure, in order to apply for a digital identity certificate. The most common time you would deal with a certificate signing request is when you need an SSL/TLS certificate so that your website has HTTPS, for data in transit. The idea is you'd submit a certificate signing request to your certificate authority; so if you're getting your certificate from, say, GoDaddy, Amazon, or GeoTrust, you have to submit a CSR. A CSR contains a public key and application information, such as a fully qualified domain name, because if you're getting a certificate for a domain name, they're going to need to know the domain name; those are the two things that get packaged together. So the idea is: you create your certificate signing request, you request the certificate from the certificate authority, and the certificate authority will issue you a certificate if it approves your request. There you go.
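As an aside, here's what generating a CSR typically looks like with OpenSSL (the domain name and file names are just examples): one command creates a new private key plus a CSR containing the public key and subject information that you would send to the CA.

```shell
# generate a 2048-bit RSA private key and a CSR for it in one step
openssl req -new -newkey rsa:2048 -nodes \
  -keyout example.test.key \
  -out example.test.csr \
  -subj "/CN=example.test/O=Example Org"

# inspect the CSR to confirm the subject information it contains
openssl req -in example.test.csr -noout -subject
```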
Hey, this is Andrew Brown from ExamPro, and we are taking a look at certificates in Azure Key Vault, one of the three offerings (keys, secrets, and certificates). If you're wondering why we did all that X.509 certificate material, it's specifically so that we're prepared to understand the purpose of certificates in Azure Key Vault. Azure Key Vault allows you to import, generate, and manage X.509 certificates: you can generate one, you can self-sign one, or you can use one with a certificate authority. Let's talk about some other things here. Key Vault partners with certificate issuer providers for TLS/SSL certificates, namely DigiCert and GlobalSign, two very large companies. You can generate certificates self-signed or through a certificate authority, and there's no need to manage the security of the private key; Key Vault takes care of it for you. It allows the certificate owner to create a policy that directs Key Vault to manage the life cycle of a certificate, and it allows certificate owners to provide contact info for notifications about lifecycle events, such as the expiration or renewal of a certificate. It supports automatic renewal with selected issuers, the Key Vault partner X.509 certificate providers and certificate authorities. So there you go.
All right, let's talk about the composition of a certificate within Azure Key Vault. We used this graphic earlier to describe the REST API interacting with just the certificate part of Azure Key Vault, but what it did show is the three components that are part of a certificate stored in Azure Key Vault: the Key Vault key, which allows key operations; the Key Vault secret, which allows retrieval of the certificate value as a secret; and the certificate metadata, the X.509 certificate data. If that's not clear, let's just get our pen tool out: we have the key, we have the secret, and then we have the metadata. Okay, so those are the three components that go alongside the certificate. And here we have an example of a certificate that I self-signed (I generated it in there), and I just want to show you that you can download the certificate in either CER format or PEM/PFX format. You see where I'm like, is it ASCII, is it binary, what's it going to be? Because when I downloaded it, I didn't get the exact result I thought I would, but remember that PEM can be used as .cer, so the format offerings can be confusing here. What helps us understand is the content type we set when we generate our certificate: CER will be ASCII format, I believe, but when we're dealing with PEM or PFX, the difference is based on the content type you chose. If you chose the content type PKCS#12, it will download in that format; if you chose PEM, it will download in the PEM format, and the same with CER. So just realize it is a bit confusing, and there's that additional option there. Azure really should not have made these two buttons that confusing; it should just be clear what they are, but that's just one of those things that Azure trips you up on. Okay.
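The download behavior just described can be summarized in a tiny lookup; this is my own summary of the lecture's point, not an Azure API, and the content type strings shown are the ones I believe Key Vault uses (treat them as assumptions):

```python
def downloaded_format(content_type: str) -> str:
    """Summarize the lecture's point: the content type chosen at
    certificate creation decides what the download buttons yield."""
    formats = {
        "application/x-pkcs12": "binary PKCS#12 (.pfx/.p12)",
        "application/x-pem-file": "Base64 ASCII PEM (.pem)",
    }
    return formats.get(content_type, "unknown content type")

print(downloaded_format("application/x-pem-file"))  # Base64 ASCII PEM (.pem)
```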
All right, we're taking a look at certificate policies. A certificate policy allows you to set and configure your certificate: the content type, the lifetime of the certificate, the key type, the key size, and various other options. The UI calls it an issuance policy, so there's some inconsistency of naming by Azure between their documentation and their UI, which is no surprise. We have a bunch of options, as we saw: the type of the authority (whether it's self-signed and things like that), the subject, the validity period, the content type, whether it's enabled, the lifetime action percentage (which I'm not sure about), and advanced options where we can set the key type, the key size, and things like that. Issuance policies only affect certificates that will be issued in the future, so a policy change won't modify an existing certificate, only the ones generated in the future. Okay.
Hey, this is Andrew Brown from ExamPro, and in this follow-along we're going to be learning all about Azure Key Vault, so let's get to it. What I want you to do is go to the top here and type in "key vault", and here we'll go ahead and create ourselves a new vault. From there, we're going to create a new resource group; I'm going to call this resource group my-example-vault. Then we will name the vault; I was going to say my-vault-example, which is kind of funny because that's slightly different from what you've seen me do before, so I'm going to use my-example-vault as the name here. For the region, US Central is fine. For pricing, we'll keep it at Standard. Soft delete is enabled, and then there's the option for purge protection. We are going to enable purge protection, and this is going to play into other follow-alongs; we'll explain as it goes, but purge protection does not allow you to purge things easily once it's enabled. So what we'll do is go ahead and hit review and create, give it a moment here, and just wait until it's done deploying. Okay, so after a short little wait, our vault is created, so go to the resource; we're going to be using this vault a little bit in some of the follow-alongs, and in some cases not so much. Okay.
Hey, this is Andrew Brown, and in this follow-along we're going to be doing some things with keys within Azure Key Vault. What I want you to do is make your way over to the Keys blade on the left-hand side. Here we're going to use Generate/Import to create a new key, and we're going to choose the Generate option. In terms of naming, we're going to call this my-disk-key, and we're going to choose RSA 2048; that seems totally fine to me. Everything else seems okay, so go ahead and create that key. Give it a moment to create (it doesn't take too long), and then what we're going to do is go on the left-hand side to IAM (Access control), and what we're going to want to do is add a new role assignment so we can start using this key. What I want you to do is look for Key Vault Administrator, which is here, and we'll go ahead and hit Next. For our user, we will choose ourselves: under User, I'm going to select Members, and I'm looking for the account I'm using there. Go ahead and select that, and that is all we need to assign so that we can actually work with that key. I think a good idea is to use a key to encrypt a disk, so what we'll do is make our way over to Disk Encryption Sets, because before you can encrypt a disk, you need to have an encryption set. So go ahead and create a new encryption set; we'll use the same resource group (so it's very easy to clean up afterwards), and we'll call this my-disk-encrypt-set. In terms of the encryption type, we're going to use double encryption, because that's much better: you have two keys encrypting, so that's a lot better. We are going to choose our vault, so we have my-example-vault (there's only one option here), and in terms of the key, we'll select my-disk-key; for the version, we'll select the current version. We'll go ahead and hit review and create, then create, and we'll give it a moment to create that encryption set; it shouldn't take too long.
resource should be deployed, only took about a minute for me. if we go here it's going to have this message up here, it's
very small, but it says: to associate a disk, image, or snapshot with this disk encryption set, you must grant permissions to the key vault.
so all we have to do is click that alert and we'll grant permissions, and now the permissions issue is solved and we are able to use that key.
so what we'll do is type disk at the top and create a new
disk, so we can apply this key for encryption. go ahead and create, we're going to choose the same resource
group here i'm going to call this my example vault and or sorry my example
disk because that's a little bit more clear than that and for the availability zone doesn't
matter for the source type it doesn't matter as well in terms of the size we
want this to be cheap we're not really using this for real so we'll use standard hdd
and we'll say okay in terms of encryption this is where things get fun we go to double encryption we choose our
key here we'll go ahead review and create and we'll just give it a moment for that
to oh we'll hit create and we'll have to wait a little while here for the create that resource so we'll just wait until
that is created okay and after a very short while the disk is ready so we'll go to that resource we'll
go to the encryption tab to see that encryption is applied so that's all it takes to use a key to encrypt
a disk so we are going to still use some of these accounts there's no cleanup yet i'll go back here and i'll see you in
the next one. hey this is andrew brown, and in this follow
along we're going to learn about backup and restore for keys. so what i want you to do is go back into the
resource group that we just recently created and we're going to make our way over to keys so i'm just sorry we got to
get into the vault first then we'll go over to keys and the idea is that we have this key
here and so you can see that we have this current version so you can add additional
versions but what's going to happen if we try to back this up so when you back this up you're going to get this file
here, and if you open up this file it's going to look like a bunch of gobbledygook. so i'm just going to try to open it
here i have it up off screen here so i'm just trying to open it up within uh visual
studio code so i'm just going to open up visual studio code again doing this off screen here
just give me a moment. all right, and so this is the backup file,
and if you take a look here, it doesn't look like anything, but the
idea is that it is our backup of our key so that we can re-import it. and just taking a look at the file name,
this is what it looks like: the vault name, my example vault, then the key name, my disk key, then there's the
date, and the key backup extension. so just recognize that's the format, and the date is very useful to indicate when you
backed it up. so let's go ahead and delete this key, because the idea is we want to
restore that backup and so we have deleted that key there
and what we're going to do is attempt a restore. so i'm going to go ahead and try that, and we get: an error
occurred while restoring the key, the key you're trying to restore already exists. why would it
throw that error, we've clearly deleted it? and the reason why is that we have purge protection on, we did that in the
first part when we set up this actual vault. here i'm going to see if we can
find the setting, wherever that purge protection is, i'm trying to remember where it is. purge protection is enabled,
so we can go here and once you enable it you cannot turn it off it's going to retain it for a certain amount of days
and so all you can do is soft delete keys so this key is not actually deleted yet
if you go to manage deleted keys you can see the key is over here and if you try to click on purge it is disabled because
we cannot remove the key because we have purge protection on but we can recover the key so we'll go ahead and recover
and so that will allow us to recover the key
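to make these soft delete, recover, restore, and purge rules concrete, here is a toy Python model of the behavior we just saw. the class and method names are made up for illustration, this is not the Azure SDK: restore fails while the soft-deleted copy of the key still exists, recover brings it back, and purge is blocked whenever purge protection is on.

```python
class ToyVault:
    """Toy model of Key Vault soft delete / purge protection semantics."""

    def __init__(self, purge_protection=True):
        self.purge_protection = purge_protection
        self.keys = {}      # active keys
        self.deleted = {}   # soft-deleted keys

    def create_key(self, name, material):
        self.keys[name] = material

    def backup(self, name):
        # a backup blob you could later restore into a vault
        return {"name": name, "material": self.keys[name]}

    def delete(self, name):
        # soft delete: the key moves to the deleted list, it is not gone
        self.deleted[name] = self.keys.pop(name)

    def restore(self, blob):
        # mirrors the portal error: the key you're trying to restore already exists
        if blob["name"] in self.keys or blob["name"] in self.deleted:
            raise ValueError("the key you are trying to restore already exists")
        self.keys[blob["name"]] = blob["material"]

    def recover(self, name):
        self.keys[name] = self.deleted.pop(name)

    def purge(self, name):
        # purge permission alone is not enough while purge protection is on
        if self.purge_protection:
            raise PermissionError("purge protection is enabled")
        del self.deleted[name]


vault = ToyVault(purge_protection=True)
vault.create_key("my-disk-key", "rsa-2048-material")
blob = vault.backup("my-disk-key")
vault.delete("my-disk-key")

try:
    vault.restore(blob)          # fails: the soft-deleted copy still exists
except ValueError as e:
    print("restore failed:", e)

try:
    vault.purge("my-disk-key")   # fails: purge protection is on
except PermissionError as e:
    print("purge failed:", e)

vault.recover("my-disk-key")     # recovery always works from soft delete
print("recovered:", "my-disk-key" in vault.keys)
```

this is only a sketch of the rules; the real service also has retention periods and role checks on top of this.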
and if we refresh here it's going to take a little bit time for that key to restore so we'll just have to wait a
little bit and then it will show up. there's one other thing i wanted to show you, which is under access policies.
if we look under our
user here and we look at the key permissions, there is an option to purge, and we don't
actually have that turned on right now. but if we were to save this and we were to still go to that purge
option, it would still say the same thing. so even if you have purge permissions, it does not matter if purge protection is
turned on, it still will not let you purge; you would need purge permissions and purge protection off in order to
be able to purge a key. so to really show you how to do that recovery, i think what we should do, i'm
just going to delete our old key here because we don't care about it but we are going to well i guess we could try
to import it into the other one so i'm just going to undo that for a second but we are going to go ahead and create
ourselves another vault so i'm going to go and type in vault at the top here
and we're going to be a little bit more careful when we create this vault. so we'll go here and we will choose
the same resource group, and for the name i'm going to say my vault no protect, and the pricing tier will be standard.
for the retention days we're going to leave it at 7, which is the lowest, and we'll say disable purge protection, because we don't want
to have that enabled. and we'll see if we can import the key into another vault, i'm not sure if we
can do that worst case we'll make a new key download the key re-upload it but i'm just curious what would happen if we
tried to upload the same key as it's still in another vault i'm not exactly sure
all right so this deployment is successful i'm going to go to this resource and go ahead to go to create
and we're going to restore from backup and we're going to take this key and see if we can actually
import it here so it looks like we can take a key and it can exist in multiple vaults i'm going to go ahead and delete
this key and we're going to say are you sure you want to delete this key i'm going to say
yes and if we go to manage keys and we refresh it takes a little bit of
time here so we'll just wait a moment for this to persist and after a short little wait like about
two minutes, i refresh and the key is here. so if i go here you'll notice the purge option is still not available, we
can obviously recover, but we don't have purge protection on, so it must be permissions. if we go to access
policies over here, scroll down, select purge, and save our changes, we can then go back
to keys we'll give it a moment to save we go back to keys we'll refresh it we'll manage our keys
and we'll go ahead and purge it and that will permanently purge it there so that's all it takes
to do that so there you go. hey this is andrew brown from exam pro
and this follow along we are going to learn about key rotation so what i want you to do is make it back to the vault
that we were just in the no protect vault and we'll go over to keys and we'll have to create a new key so i'm
going to create a new key here called my new key and it's going to be rsa 2048 and we're going to go ahead and create
that key. so the idea of key rotation is that if you think your key has been compromised, or you have a
company policy that says you should rotate keys for security purposes after a while,
you can easily rotate out keys. so in order to rotate a key it's as simple as going to
your rotation policy here and we can say rotate now and that will immediately rotate the key as you'll see there is
now the current version and the older version so that was the key we had before this is the new key we can also
set up a rotation policy so if we go to rotation policy here we can set the expiry time to
let's put the lowest number in here, one, and it says this has to be at least 28 days. then we want to enable it, and
we want it to automatically renew after a certain amount of time. so here if i put seven, trying to put
the lowest number, it says the value cannot be greater than
21 days, because the minimum and maximum both have a buffer. i don't exactly understand the math, but
i know that if we change it to something like
an expiry time of 40 days, where there's a larger gap, this should be less of a problem.
so it cannot be greater than 21 with a 28 day expiry. i knew the math here a moment ago, i could have sworn it was 28 and 7.
so i'm not sure whether this is broken. let's go ahead and save that,
try 40 up here and hit save: value cannot be greater than 33 days.
okay, 33, oh boy. so let me see.
all right, so the problem was i was in months, and we'll have to switch to days here. so here if we put seven it works fine,
but there is a bit of work to figure this out, and it will tell you, so you'll have to figure it out. we'll go ahead and
save that, oh yeah, it prompts again: the time before expiry cannot be greater
than 21 days. okay, so we put the expiry to 40, we save, so there is
a bit of work there. i don't exactly understand all the math, but i guess it's a way so that you have less
problems. but that's key rotation in a nutshell, so there you go.
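the validation errors we hit suggest a simple rule: the expiry time must be at least 28 days, and the automatic rotation time (the "time before expiry") can be at most the expiry minus 7 days, which matches the 21-day limit for a 28-day expiry and the 33-day limit for a 40-day expiry. here is a sketch of that inferred rule; the 7-day buffer is my reading of those portal errors, not documented math:

```python
MIN_EXPIRY_DAYS = 28   # the portal said "this has to be at least 28 days"
BUFFER_DAYS = 7        # inferred from the errors: 28 -> max 21, 40 -> max 33

def validate_rotation_policy(expiry_days, rotate_before_expiry_days):
    """Return a list of validation errors; empty means the policy looks valid."""
    errors = []
    if expiry_days < MIN_EXPIRY_DAYS:
        errors.append(f"expiry must be at least {MIN_EXPIRY_DAYS} days")
    max_rotate = expiry_days - BUFFER_DAYS
    if rotate_before_expiry_days > max_rotate:
        errors.append(f"value cannot be greater than {max_rotate} days")
    return errors

print(validate_rotation_policy(28, 21))  # exactly at the limit, no errors
print(validate_rotation_policy(28, 22))  # over the 21-day limit for a 28-day expiry
print(validate_rotation_policy(40, 33))  # a 40-day expiry allows up to 33
```

treat this as a mnemonic for the relationship between the two fields rather than a spec.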
hey this is andrew brown from exam pro, and in this follow along we're going to learn about secrets. so go back to your
previous vault, the no protect vault, and we'll go to secrets here, and we're going to go ahead and generate a new secret,
and we're going to use this secret within an application so we're going to have manual we'll say my secret
and make sure you spell right so my secret secret
there we go. and for the value i'm going to say hello mars, it's a secret value, that's the whole point of secrets. for
the content type we're going to say text/plain so it knows that it's text, and everything else looks fine, so go ahead
and save that, and we will easily create ourselves a secret here. so
the secret is now available now we just need to use it in some kind of application so what i'm going to
recommend is that we go use a Gitpod template, so we're going to search gitpod template
.NET core cli c sharp, there we go, and from here we'll click into it,
and uh what we want to do is use this template and we will have it private it's totally
fine i'm going to go to exam pro here and i'm going to call it i don't know azure
vault secrets dot net oh did i use that before so we'll do new if you don't have to do new that's great
i'm going to go ahead and create this as a private repository from a template so it's going to set us up with a base
core .NET application, i think .NET 6.0, it's just going to make things a lot easier for us. so you can click the
button down below actually probably not because i'll go to the wrong repo but um
what you'll want to do here is, if you don't have the chrome extension installed, all it's doing is
putting the gitpod url in front of the repo url to launch this Gitpod environment. Gitpod has a free tier so it's very
useful to use, especially for .NET, because we do have to do a bunch of setup for it, and i'm not really good at .NET, so this is
the approach i like to take um and the idea is that we just close this and it cleans itself up so it's really really
nice so we'll just wait for the image to build and for the environment to run and then
we'll get to the next step okay all right so after a couple minutes this environment is ready and i'm just going
to go ahead and hit the x there, go to terminal, and we're going to make sure this .NET app works. we're going to do
dotnet run, and this is just a basic hello world application, so we can see that it prints out. we're going to need a
couple packages, we're going to use dotnet add package to add these here, so dotnet add package Azure.Identity,
and then we will do dotnet add package Azure.Security.KeyVault.Secrets.
and the idea is we'll need to modify this file in order to load some stuff,
so i can't remember if i wrote this myself or if i got it from an azure azure documentation but we'll write it
out because it's not too much to write out: using Azure.Core, using
Azure.Identity, using Azure.Security.KeyVault.Secrets,
and then under the program we want to modify our static void Main. so we're going to need a new string,
and this is going to be called secretName, set to my secret. actually we have a bunch
here, and i think this is where we probably should set environment variables, like the key vault
name, but actually i don't know how to set environment variables
in .NET and i don't want to look it up, so i'm just going to write it out and hard code it, because it's not going
to really matter if we do that. so i'm going to say myvaultnoprotect, that's what we called it, my vault no protect,
yeah. and for
the key vault uri we're going to do this, so we'll say
https://myvaultnoprotect.vault.azure.net. i think we need semicolons on the end
here, otherwise it complains a bunch. so that's the key vault name.
then var options = new SecretClientOptions(),
and this i think is an object initializer, so we'll open this
up here like that, and we will have to set the Retry property. don't worry if you don't know
what's going on because it honestly it doesn't really matter it's it's just like you know you just need to get some
experience writing some code and eventually it starts to sink in because it really takes a lot of time to
learn code you just have to spend a lot of time doing it so that we're just going through the
motions without having to think about it too hard. so TimeSpan.FromSeconds(16) for the delay, and i'll explain it
once we write it all up here, but yeah, it's very good practice to just write out code.
MaxRetries is five, and we'll say Mode = RetryMode.Exponential. and i feel like we have to put
semicolons in .NET, otherwise it complains. so that is that
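the retry options we just set describe an exponential backoff: each retry waits roughly twice as long as the last, up to a cap. here is a rough Python illustration of that spacing; the 2-second base and 16-second cap are assumptions based on the common Azure SDK sample, not values from this transcript:

```python
def backoff_schedule(base_delay=2.0, max_delay=16.0, max_retries=5):
    """Delays (in seconds) before each retry: double each time, capped at max_delay."""
    return [min(base_delay * (2 ** attempt), max_delay) for attempt in range(max_retries)]

print(backoff_schedule())  # [2.0, 4.0, 8.0, 16.0, 16.0]
```

the real SDK also adds random jitter to these delays, but the doubling-with-a-cap shape is the idea behind RetryMode.Exponential.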
part. and then we'll need a client, so we'll say var client = new SecretClient(new
Uri(keyVaultUri),
new DefaultAzureCredential(), options);
then KeyVaultSecret secret = client.GetSecret(secretName);
and down here we'll do Console.WriteLine
with Get Secret: plus
secret.Value. there, i didn't mean to put a capital V on the value, it's giving
me some trouble here semicolon and then the idea is we're going to want to enter the secret yeah
i'm pretty sure i took this from uh the azure docs i just don't remember where because i know that
this looks very familiar and uh but i think i reorganized it so that it was a little bit more streamlined i think
there was a little bit more going on with theirs. and so we'll say string
secretValue, lowercase secret value,
equals Console.ReadLine(); then client.SetSecret(
secretName, secretValue); and
we can do Console.WriteLine for Set Secret.
we'll do another console write here, this will be for the key, so let's give this a couple spaces,
and we'll do plus secretName, i can spell this one right, and we'll do secretValue.
we'll do another Console.WriteLine, we're doing lots of these console writes here, actually it'll just be pretty much the
same lineup, so i'll just go grab this one. then client.StartDeleteSecret(
secretName);
and Console.WriteLine with Start Delete
Secret plus keyVaultName. and then we'll go get it one more time,
so let's take a look at this, hopefully we didn't introduce any mistakes by typing this all up by hand.
these are supposed to have semicolons on the end, because i can see a little red mark
there. so the idea is that we are importing Azure.Core, Azure.Identity, and Azure.Security.KeyVault.Secrets
so that we can get access to stuff. we're providing our secret name, the name of our vault, and the url that it needs
to hit to do that um here it's going to get the secret client options and so it's going to have a bit of
retries here if it doesn't work out properly. down below we have the secret client, so we're passing in the key vault
uri, and then we need our credentials here. we didn't type anything out for credentials,
so i'm a little bit surprised by that but maybe that's not an issue and so what i'm going to do here is i'm
just going to go ahead and commit all this actually before we do that we're going to create ourselves a get pod yaml
file, so gitpod init, and it asks about the existing file, so i'm just going to say no there.
but what we need to do here is install the cli, the azure cli. so i'm going to make a
new task here called azure, we'll say init, and then here it will be
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash.
aka.ms is microsoft's url shortener.
okay, so that should install it. i'm just going to double check to make sure
i typed this out correctly,
and we will say allow and paste that in there yeah it's working so what we'll do is
we'll commit our stuff here add that there so
commit code and i guess we do want to run this line
so i'm going to go here and run it because the way it's going to get the credentials it's going to grab it once
we authenticate with the azure cli because it'll set something locally and that's why we didn't have to pass in a
key or anything and maybe before we do that let's just double check our permissions
so if we go to the IAM control here, we go to our role assignment, or actually,
access policies. we'll go to andrew brown, secrets: get, list,
so we should be able to do it, we have all the permissions to do it. so now that that is done, we're going to run
az login --use-device-code and hit enter,
and so here we have this address here that will help us log in so we'll go here to this address we'll copy this
code we'll enter the code in we'll say oops that's not what i wanted
to do we'll paste that in again and we will hit uh sign in here and we'll say continue
and that is all done there. so now what we can do is try to run the application, so we're going to type in
dotnet build to build this application and see if we get any errors. if we get errors i'm not really
surprised because we had done so much typing here but we will go back to our code and take a look here so on line 38
there's no semicolon, a very small mistake, so i don't mind
that. if that's all the mistakes we wrote, that'll be great. fix
the missing semicolon, and we will commit and sync. dotnet build runs
and all projects are up to date, so that looks pretty good to me. we'll do dotnet run, and
if it works that'll be great i don't know if we explained the whole program but we'll just run this here
and it should run oh you know what it's prompted us probably
no there we go so we'll type in hello world uh hit enter
great so let's take a look at what this did so the idea is that we have a client and the id and we want to grab the
secret so that's the secret name we have up there my secret and if we console log it out it gives us hello mars because
that's what we set and then we are prompted to enter our own input
and we use this to set the secret and then we immediately fetch and say what is the key what is the value now if you
notice it says my secret here hello world and but when we go get the secret again
down below, let me just see here. the reason why it's not printing out as expected is because we
don't have a newline. this is something i'm not a huge fan of in .NET, but you actually have to add the
\n like that, otherwise it gets a bit confusing here, as you can see
i'm a little bit confused about our output so we have set secret
key value my secret hello world that's new value and then it goes down below and we
print out the secret value again from the actual value object so these are the ones that are just locally that we set
right but this is the actual object but the object is still saying hello mars but we updated up here and the reason
why that happened is because it takes time for those changes to propagate so if we were to call it again it would
have the right value but you know it has to replicate it and so that takes a bit of time so sometimes
you have to wait uh for the stuff so we go ahead and we delete it and then we go start delete secret so we say that we
are deleting it and then we go get the secret but we've deleted the secret so how can we see the secret if we delete
it and again it just has to do with propagation if we were to go into our vaults and to go to secrets that secret
should not be there anymore, so just consider that you have to wait. i think the original tutorial had
sleep commands, i don't remember the exact call in
.NET, but it's something like Thread.Sleep,
and they were running those in between, something like a couple
seconds worth of milliseconds, and they're doing that to make sure the change is there, but i thought it was
better to show you the fact that you have to wait a little while um but that's it there so we are all done here
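since changes take a moment to propagate, a more robust pattern than a fixed sleep is to poll until the value you read matches what you wrote. a small sketch of that idea; the fetch function here is a hypothetical stand-in for a real get-secret call:

```python
import time

def wait_for_value(fetch, expected, attempts=10, delay_seconds=0.01):
    """Poll fetch() until it returns expected, sleeping between attempts.

    Returns True once the value matches, False if it never does.
    """
    for _ in range(attempts):
        if fetch() == expected:
            return True
        time.sleep(delay_seconds)
    return False

# simulate a store whose reads lag behind the write by a couple of attempts
reads = iter(["hello mars", "hello mars", "hello world"])
print(wait_for_value(lambda: next(reads), "hello world"))
```

in a real script you would use a longer delay (a second or two) and make fetch call the actual client.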
so we can go over to our resource group and we can delete that resource group and all the
resources in it all right and we can close that off so there you go
hey this is andrew brown from exam pro, and we're taking a look at azure app
configuration. this allows you to centralize all your app settings in one location. it is ideal for multi-environment
or multi-geography applications, because it provides a dynamic way to change application
settings without having to restart them but in general if you're building out any kind of application
it is actually really useful it also integrates with azure key vault which stores application secrets which we'll
talk about soon enough azure app configurations main benefits are a fully managed service that can be
set up in minutes i would say seconds to be honest flexible key representations and
mappings tagging with labels point in time replay of settings dedicate ui for feature flag management which is to me
the best part of azure app configuration comparison of two sets of configurations on custom defined dimensions enhanced
security through azure managed identities encryption of sensitive information at rest and in transit
native integration with popular frameworks and when we say popular frameworks we're talking about things
like .NET Core, ASP.NET, Java Spring, JavaScript Node.js, Python, and Azure Functions with .NET Core. so
no Rails, no Laravel, so i'm not sure what they're saying when they say popular frameworks in quotations, but
you know you can still integrate them because they do have language support but there you go
all right, let's take a look at the tiers for azure app configuration. we've
got free and standard, let's see what the difference is. the first is resources per subscription: one for
free, unlimited for standard. for storage per resource: 10 megabytes for free versus 1 gigabyte for
standard. revision history: seven days versus 30 days. for request quotas: a thousand versus 30,000 per hour, and no sla
on the free tier. for security functionality, and this stuff is important, you get encryption with microsoft
managed keys, hmac and azure ad (active directory) authentication, role-based access control support,
managed identity, and service tags, so a lot of stuff in the free tier. in the paid tier you also get customer
managed keys (cmks) and private link support. then for the cost, it's free for the free tier of course, and for
standard it's about $1.20 per resource per day, plus $0.06 per 10,000 requests, with 200,000 requests included in the daily
charge. so you might look at those request charges and say oh, that stacks up, but you get
that 200,000 free daily. and soft delete is not supported in the free tier.
so soft delete means i deleted something and i didn't actually want to delete it, so let's bring it back, it's
like putting it in the trash can and pulling it right back out. but there you go
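using the numbers above (roughly $1.20 per resource per day on standard, 200,000 requests included daily, then about $0.06 per 10,000 extra requests), a back-of-the-envelope daily cost works out like this. the figures come from this discussion and may not match current Azure pricing:

```python
import math

def standard_daily_cost(requests, base=1.20, included=200_000,
                        overage_rate=0.06, overage_unit=10_000):
    """Estimated daily cost (USD) of one standard-tier App Configuration resource."""
    extra = max(0, requests - included)
    # overage is billed per block of 10,000 requests above the included amount
    return round(base + math.ceil(extra / overage_unit) * overage_rate, 2)

print(standard_daily_cost(150_000))  # 1.2 - within the included requests
print(standard_daily_cost(250_000))  # 1.5 - base plus 5 overage blocks at $0.06
```

the point is just that the 200,000 included requests absorb most day-to-day usage before overage kicks in.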
all right, let's take a look at configuration explorer. this is where
you're going to be looking at the data that you've inserted, for the most part. and so the nice thing is that you go
here and you can see the key and the value, and even though the value is hidden, if you click values it will show you them all, which
for me doesn't feel super secure, but i guess that's how it works. again, it just gives me a feeling of
not being secure, but it is secure. and the idea is that you'll be able to create a key-value, and it's
either a key-value pair or a key vault reference, which is the way that you're able to reference secrets
within your vault. so there's not really much to it, it's not super complicated,
this view is fine, it does what it says it does, but there you go
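a key vault reference in app configuration is stored as an ordinary key-value whose value is a small JSON document pointing at the secret's URI, marked with a special content type. here is a sketch of building and reading one; the content type string is the one i believe app configuration uses, so verify it against the docs before relying on it:

```python
import json

# assumed content type for Key Vault references in App Configuration
KEYVAULT_REF_CONTENT_TYPE = (
    "application/vnd.microsoft.appconfig.keyvaultref+json;charset=utf-8"
)

def make_keyvault_reference(key, secret_uri):
    """Build an App Configuration key-value that references a Key Vault secret."""
    return {
        "key": key,
        "value": json.dumps({"uri": secret_uri}),
        "content_type": KEYVAULT_REF_CONTENT_TYPE,
    }

def resolve_secret_uri(kv):
    """Extract the secret URI back out of a Key Vault reference key-value."""
    if kv["content_type"] != KEYVAULT_REF_CONTENT_TYPE:
        raise ValueError("not a Key Vault reference")
    return json.loads(kv["value"])["uri"]

ref = make_keyvault_reference(
    "MyApp:DbPassword",
    "https://myvaultnoprotect.vault.azure.net/secrets/mysecret",
)
print(resolve_secret_uri(ref))
```

the app still needs permission on the vault itself: app configuration stores only the pointer, never the secret value.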
all right, let's take a look at feature manager for azure app configuration. this
is the meat and potatoes, this is the thing that you really want to be using azure app configuration for: it's for
feature flags so what is a feature flag a feature flag provides an alternative to maintaining multiple feature branches
in source code a condition within the code enables or disables a feature during runtime and this makes it easier
to roll back or do a to b testing for new functionality so azure app configuration feature manager allows you
to add featured flags which can be then accessed via code so the idea is you say enable a feature flag you name the flag
whatever you want so i called flash cards assuming i need a flash card feature
you can label them of course and then you are providing code now i didn't see
much for things outside of net and or well c-sharp and java so i'm hoping that their sdks or there's some kind of
library for ruby python node.js things like that but it might just be only for c sharp so net
apps and java now let's take a talk let's talk about an advanced feature of feature manager which is feature filters
this allows filtering of features so a feature filter consistently evaluates the state of a feature flag and a
feature flag supports three types of built-in filters: targeting, time window, and percentage. which is confusing,
because when you go and turn it on it says targeting, time window, and custom. the thing is that
both targeting and percentage live within the targeting filter, and then custom is its own thing, so that is one of those little
inconsistencies that microsoft likes to present you in their documentation so if we were looking at a time window and
i think this is really cool but you could say okay this feature starts on this date and it's between this time and
i guess starts on and ends on that time so it's like if you just want to run a feature for you know 10 days you could
do that. then there's the percentage thing, so the idea is that you say okay, i'm going to
define two different groups, and group a is going to get 50% of this feature and group b will get 50% of
this feature, so it'll change how widely the feature is available per group, like how many users in that
group will get it and then targeting would be like you could say i want it for these users so when they say
targeting, it literally just says user and you put a name in, and it's just a text value, it's not like
it's linked to an azure active directory user or anything like that. and then they have custom filters, which can be created
based on different factors such as device use types geography location etc when i clicked on it it
wasn't very clear on how to do custom filters but the idea is that you can go beyond these ones if you want to uh you
know, have different values. so there you go.
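the three built-in filters can be sketched as simple predicates over a request: a time window checks the clock, targeting checks the user name against a plain list, and percentage buckets users deterministically so the same user always gets the same answer. this is just an illustration of the concepts, not how the Azure SDK implements them:

```python
import hashlib
from datetime import datetime

def time_window(now, start, end):
    """Feature is on only between start and end."""
    return start <= now <= end

def targeting(user, targeted_users):
    """Feature is on only for explicitly listed users (plain text names)."""
    return user in targeted_users

def percentage(user, percent):
    """Deterministic rollout: hash the user into a 0-99 bucket."""
    bucket = int(hashlib.sha256(user.encode()).hexdigest(), 16) % 100
    return bucket < percent

now = datetime(2022, 6, 15)
print(time_window(now, datetime(2022, 6, 10), datetime(2022, 6, 20)))  # True
print(targeting("andrew", ["andrew", "bayko"]))                        # True
print(percentage("andrew", 100), percentage("andrew", 0))              # True False
```

hashing the user name is what makes a percentage rollout stable: a user either always sees the feature or never does, rather than it flickering per request.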
all right let's take a look here at the microsoft graph api which exposes rest apis and sdks to access data from the
following microsoft cloud services. the first is M365 core services, where we have bookings, calendar, delve, excel, M365
eDiscovery, microsoft search, onedrive, onenote, outlook exchange, people, planner, sharepoint, teams, to do,
and workspace analytics, so basically a lot of services. for enterprise mobility and security we have
advanced threat analytics, advanced threat protection which is atp, azure ad, identity manager, and intune. then we have
dynamics 365 business central. for windows 10 services we have activities, devices, notifications, and universal print.
and for supported sdks we have android, ios, angular, asp.net, go, javascript, node.js, java, php, powershell, and
python. they had ruby, but i looked into it and it doesn't work anymore, and they say they're going to
make a new one, but i'm still waiting for it, so i always get the short stick when it comes to ruby.
and the data is accessible via the unified endpoint at graph.microsoft.com but how do we use
this stuff let's take a look here so for node.js we would install the microsoft graph client and then here as an example
we are sending an email so the idea is getting my pen tool out here we create a client we're defining uh some mail and
then we're calling our endpoint um via the api that's basically how you're always going to be doing it post that
message, and that's how the graph api works. so there you go.
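to make the "define some mail, then post it" step concrete, here is a Python sketch of the JSON body the sendMail endpoint expects, which you would POST to https://graph.microsoft.com/v1.0/me/sendMail. this only builds the payload; the actual request needs an access token, so that part is left out:

```python
def build_send_mail_body(subject, content, to_addresses):
    """Build the request body for POST /me/sendMail on Microsoft Graph."""
    return {
        "message": {
            "subject": subject,
            "body": {"contentType": "Text", "content": content},
            "toRecipients": [
                # one emailAddress object per recipient
                {"emailAddress": {"address": addr}} for addr in to_addresses
            ],
        },
        "saveToSentItems": "true",
    }

body = build_send_mail_body("Hello", "Sent via Graph", ["user@example.com"])
print(body["message"]["toRecipients"][0]["emailAddress"]["address"])
```

whichever sdk you use, it is ultimately serializing a structure like this and posting it to the unified graph.microsoft.com endpoint.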
all right let's take a look at microsoft graph connectors so connectors offers a simple way to bring external data into
microsoft graph and enhance m365 intelligence experiences so you might want to build a custom connector to
integrate with services that aren't available as connectors built by microsoft using microsoft graph
connectors rest api and you can use the microsoft graph connector api to create and manage external data connections
define and register the schemas of the external data types ingest external data items and microsoft graph sync external
groups and microsoft search indexes all your m365 data to make it searchable for users with microsoft graph connectors
so your organization can index third-party data so it appears in the microsoft search results so you're like
okay where's we have microsoft graph connectors where are they and that's
where i ended up on the connector gallery so microsoft has a gallery website containing a hundred plus
connectors, it's at microsoft.com/microsoft-search/connectors, and i was a bit confused at
first but now i understand what it is but the idea is that they have a bunch of different connectors from a bunch of
different third-party providers so microsoft has some but there's a lot from a bunch of different providers some
example connectors could be amazon s3 by BA Insights, azure blob storage by accenture, google cloud sql by BA
Insights, ibm connections by raytheon, google drive by BA Insights, and azure devops by microsoft. and connectors might
not be free and their instructions greatly vary based on the implementer so this is not like a unified thing it's
you go there and it's like some other app that's used for it so um yeah it's kind of weird because it just
doesn't feel very standardized but i guess it's just kind of a library of services that can connect things kind of
like zapier but as if there was all these specialized services by individual third-party cloud vendors
but there you go well let's take a look here at microsoft graph data connect which
augments microsoft graph's transactional model with an intelligent way to access rich data at scale
so they're just meaning about getting data out of microsoft and putting it into some kind
of tool that can transform it do ai on it things like that so microsoft graph data connect uses azure data factory to
copy M365 data to your application storage at configurable intervals. and the way this is going to
work is you're going to have to go to the microsoft graph data connect under your m365 admin center go to
settings work settings and services and from there you will turn it on and the idea once it's turned on then you can go
over to azure data factory and by adding it as a data set to your pipeline you now have the ability to access your data
and if you know how data factory works then you know you can do a bunch of stuff with it store it in
a blob storage, send it to Synapse, do whatever you want from there, okay?
hey this is andrew brown from exam pro and we are taking a look at microsoft
graph so this is a gateway service to programmatically access m365 windows 10 enterprise mobility plus security and so
here's the big graphic but it's composed of three elements so microsoft graph api the microsoft graph connectors and the
microsoft data connect so here in the middle we have graph api this is where you programmatically access various
microsoft services you have connectors these are ways of getting external data in to microsoft graph and then you have
graph data connect this is a way of storing data here so hopefully that makes it pretty
clear i do find microsoft graph a little bit confusing because there's a lot going on
here but hopefully as we work through these slides here it will make more sense okay
Hey, this is Andrew Brown from ExamPro, and we are taking a look at Azure Front Door, which is a traffic manager, traffic accelerator, global load balancer, and content delivery network (CDN). Yes, it does all of those things, and I think that's what creates a lot of confusion when people first look at Azure Front Door. Azure's definition is that Front Door is a modern application delivery network platform, providing a secure, scalable CDN, dynamic site acceleration, and global HTTPS load balancing for your global web applications. There are services that overlap with Azure Front Door, like Azure CDN, but understand that this is a very robust service and one you'll want to use quite often for your applications. Azure Front Door's features include: caching, like a CDN, with rules and expiry policies; resilience, by distributing incoming traffic across multiple Azure regions; cookie-based session affinity for stateful applications, where traffic needs to be routed back to the same backend; health probes, to determine the healthiest and closest backend for a client request (probes are common at either the DNS level or the load-balancing level, and since Front Door is both, it has them); WAF support, protecting your backends from malicious attacks and vulnerabilities, which I believe works by attaching an Azure WAF policy; URL redirect, redirecting traffic based on a variety of things such as the protocol (HTTP or HTTPS), the host name, the path, the query string, and more; and URL rewrite, with a powerful engine for rewriting incoming requests into a different backend request. In the simplest sense, Azure Front Door has frontends, or domains, where you bring traffic in, and backends where it's sent, with filtering in between. There are actually a lot more components, and we'll have a diagram to show them, but this is the basis of Azure Front Door.
Well, let's take a look at the core components of Azure Front Door. I made this graphic specifically because I couldn't find a good representation or example anywhere, and I thought it was important because when we start talking about routing, the path looks similar but not identical to this; I don't know why there are inconsistencies. There is an Azure Front Door Classic, but I was using the latest version, so if you're confused, I was confused too. This is my visualization of how it works when you're actually using it in the GUI. The idea is you have a profile, and a profile, as far as I can understand, contains all the other Azure Front Door components. Then you have an endpoint, which is the pathway from the frontend to the backend. Within your endpoint you need some kind of origin, which says what to point at on the backend; and even though the portal says "origin," you actually assign origin groups to an endpoint, so it can rotate through multiple origins. Then you have a route, where you apply your rules and rule sets to determine further routing, and you can place a WAF policy in the path before traffic reaches the backend. One interesting thing about routing: when the docs describe the order of data coming in, they describe the WAF policy in one position, but the interface shows it on the other side. Whether it sits here or there doesn't really matter in practice, but I was trying to pin it down. The way data gets into Azure Front Door is that it hits an edge location, and that edge location then matches the request to a frontend (the portal says "trigger," but "match" would be a better word). I drew it as a kind of expressway or highway to emphasize that this is very, very fast to get to your Azure resources. Hopefully that makes the components pretty clear; it'll be crystal clear when we do the follow-along. Okay.
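Since the component hierarchy is the confusing part, here is a rough sketch of the relationships in plain Python. The containment shown (profile holds endpoints, an endpoint has routes, a route points at an origin group of origins) is my reading of the GUI, not an official schema, and the names and host value are illustrative.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Origin:
    host: str
    priority: int = 1   # 1-5, lower number = higher priority
    weight: int = 50    # 1-1000, default 50

@dataclass
class OriginGroup:
    name: str
    origins: List[Origin] = field(default_factory=list)

@dataclass
class Route:
    pattern: str                  # e.g. "/*"
    origin_group: OriginGroup

@dataclass
class Endpoint:
    hostname: str                 # e.g. "myendpoint.azurefd.net"
    routes: List[Route] = field(default_factory=list)

@dataclass
class Profile:
    name: str
    endpoints: List[Endpoint] = field(default_factory=list)

# Wire the pieces together the same way the portal does.
# The storage hostname below is a made-up example endpoint.
group = OriginGroup("default-origin-group",
                    [Origin("staticstorage8888.z13.web.core.windows.net")])
endpoint = Endpoint("myendpoint.azurefd.net", [Route("/*", group)])
profile = Profile("my-front-door", [endpoint])
```

Walking the object graph from profile down to origin mirrors what you click through in the Front Door manager.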
All right, let's take a look at the tiers for Azure Front Door, because the tier determines the feature set available to us. Azure offers two tiers. The first is Standard, which is content-delivery optimized: it offers both static and dynamic content acceleration, global load balancing, SSL offloading, domain and certificate management, enhanced traffic analytics, and basic security capabilities. That sounds like a lot, but with Premium we get everything in Standard plus security-optimized features: extensive web application firewall capabilities, bot protection, Private Link support (which I believe is for internal routing of traffic, keeping things within the Azure network), and integration with Microsoft threat intelligence and security analytics. Standard is good for most use cases, but if you need that additional security, that's where Premium comes into play. Okay.
All right, let's talk about how routing works in Azure Front Door, because there are a lot of components, and it's important to understand the flow and the different routing methods available to us. Routing is the path an HTTP(S) request from the user takes to reach a backend service configured in Azure Front Door. At the start you have the HTTP request from the user, most likely coming from their browser. It travels over the internet to the closest edge location; an edge location is just a point of presence, a data center or hardware sitting on the edge of a network that lets you enter the Azure network. From there you are inside the Azure network, and the request is matched to an Azure Front Door profile. If you have WAF rules, the web application firewall rules are evaluated, and if everything is okay the request is matched to an Azure Front Door route. From that route it goes through the rules engine to evaluate where to route things. If there is cached content, it is returned immediately; if not, the request proceeds to the next step, which is an origin group. An origin group is just a grouping of origins, and an origin is how you send a request to a backend. We'll see these components over and over again, so it should become very clear how they work together. Now let's talk about the four traffic routing methods available in Azure Front Door. The first is latency: requests are sent to the lowest-latency backends, within an acceptable sensitivity range. Then there's priority: requests are sent based on a user-defined number. There's weighted: requests are distributed to backends according to a weight coefficient. And there's session affinity: requests from the same end user are sent to the same backend, which is what you want for stateful applications. If you're using AWS, you'd see this kind of routing functionality in Route 53, but in Azure we see it within Front Door as part of this modern application delivery stack. Okay, hopefully that gives you an idea. There you go.
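To make the latency method concrete, here's a small hypothetical sketch of how a latency-sensitivity window could work. The function name and values are illustrative, not Front Door's actual algorithm: the fastest backend is always eligible, and anything within the sensitivity window of it stays in the pool.

```python
def pick_backends_by_latency(backends, sensitivity_ms=0):
    """backends: list of (name, latency_ms) pairs for healthy backends.
    With sensitivity 0, only the single fastest backend is eligible;
    otherwise every backend within sensitivity_ms of the fastest stays
    in the pool and traffic is spread across the pool."""
    fastest = min(latency for _, latency in backends)
    return [name for name, latency in backends
            if latency - fastest <= sensitivity_ms]

backends = [("east-us", 20), ("west-eu", 95), ("east-us-2", 35)]
print(pick_backends_by_latency(backends, 0))    # ['east-us']
print(pick_backends_by_latency(backends, 30))   # ['east-us', 'east-us-2']
```

With sensitivity 0 the lowest-latency backend takes everything; widening the window to 30 ms lets the 35 ms backend share traffic, which is the trade-off the setting controls.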
We are looking at Azure Front Door, and it's time to take a closer look at the origin component. The origin is what Azure Front Door points at to serve content to the end user: an origin is the endpoint that points to your backend. Here's an example of the form you'll fill out, and the most important field is the origin type, because that determines the supported backends, the supported origins. The options are: Azure Blob Storage; Azure Storage (static website hosting); Cloud Service (yes, there is an Azure service called Azure Cloud Services; I can't remember what it does, but it is a service); Azure App Service, where we commonly deploy apps; Static Web App; API Management; Application Gateway; public IP address, which just points at an IP, not necessarily a service; Azure Traffic Manager; Azure Spring Cloud (Spring Cloud is for Java applications, I believe); Azure Container Instances (ACI); or Custom, where you just provide a host name. Two other very important fields for an origin are priority and weight. Priority determines who to send traffic to first; it's a number between 1 and 5, where the lowest number is the highest priority, and backends can share the same priority number. Weight lets you determine the split of traffic between origins of the same priority; the number can be between 1 and 1000, and the default value is 50. So if two origins were both priority 1 with weight 50 each, they'd get a 50/50 split between them. This is what it looks like when you add an origin to an origin group: you can see the status is enabled, the priority is 1, and the weight is 1000. There you go.
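The priority and weight rules described above can be sketched as a simplified simulation (this is my illustration of the documented behavior, not Front Door's real selection code): traffic goes only to healthy origins at the lowest priority number, split by weight.

```python
import random

def choose_origin(origins):
    """origins: list of dicts with name, priority (1-5, lower wins),
    weight (1-1000), and a healthy flag. Pick a healthy origin from the
    best (lowest) priority tier, weighted by the weight field."""
    healthy = [o for o in origins if o["healthy"]]
    best = min(o["priority"] for o in healthy)
    tier = [o for o in healthy if o["priority"] == best]
    names = [o["name"] for o in tier]
    weights = [o["weight"] for o in tier]
    return random.choices(names, weights=weights, k=1)[0]

origins = [
    {"name": "primary-a", "priority": 1, "weight": 500, "healthy": True},
    {"name": "primary-b", "priority": 1, "weight": 500, "healthy": True},
    {"name": "standby",   "priority": 2, "weight": 50,  "healthy": True},
]
print(choose_origin(origins))  # one of primary-a / primary-b
```

The priority-2 standby origin only ever receives traffic once both priority-1 origins go unhealthy, which is how you'd model an active/passive failover pair.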
So we just talked about origins for Azure Front Door; now let's talk about origin groups. Origin groups are a collection of origins, and an origin has to belong to an origin group; you can't just create an origin and let it float around in Azure on its own, it will always be assigned to one. When you create an Azure Front Door profile, by default you get an origin group called default-origin-group, and here you can see it has a default route for whatever we set up for this Front Door. A few other things you need to know, because there's more to origin groups than just grouping: you can apply things to the group, and the two things you can apply are health probes, to check the health of your origins, and load balancing settings, to determine how traffic is balanced across your origins. In order for inbound traffic to reach an origin group, an endpoint needs to be associated to the origin group via a route; there's a button in the interface for associating an endpoint to a route, it's that simple. We will explore health probes and load balancing settings a little more; just understand that they belong to the origin group and aren't separate features floating around in the Azure Front Door UI. Okay, now let's take a look at
health checks, also known as health probes; that's what they're called in the interface, while the documentation calls them health checks. This is a feature found under origin groups that then applies to your origins. It allows Front Door to ping a backend to determine whether it returns a healthy response, which is determined by a 200 OK status. If the response is healthy, traffic gets routed to that backend; if it is not healthy, or considered unhealthy, traffic is simply not routed there but to other healthy backends instead, if other origins are configured. You might be asking: what is an HTTP response code, just in case you don't know? The idea is that when a user sends an HTTP request, an HTTP response is returned, and every response carries a response code, a number that communicates how the backend server interpreted the request. There are a lot of them, but I'll just show you the most popular: 200 OK, 403 Forbidden, 404 Not Found, and 500 Internal Server Error. I'm not kidding, there are something like a hundred of these, but those are the most common ones you've probably seen before,
and they all mean something. Okay, now let's take a look at load balancing settings, another option available under the origin group. These allow you to define what sample set is used to call a backend healthy or unhealthy, so you'll use them alongside health checks, or health probes. The settings are: sample size, how many probe responses to look at; successful samples required, how many of that sample size have to be successful; and latency sensitivity. With a latency sensitivity of zero, Front Door always sends traffic to the fastest available backend; otherwise, Front Door round-robins traffic between the fastest backend and the next-fastest backends that fall within the configured latency sensitivity. So there you go.
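Putting health probes and the load balancing settings together, here's a minimal sketch of the sample-based health decision. The parameter names mirror the portal fields but the function itself is just an illustration of the idea, not Azure's implementation.

```python
from collections import deque

def is_healthy(status_codes, sample_size=4, successful_samples=3):
    """Look at the last sample_size probe responses and require at
    least successful_samples of them to be 200 OK before the origin
    is considered healthy."""
    recent = deque(status_codes, maxlen=sample_size)  # keep only latest probes
    successes = sum(1 for code in recent if code == 200)
    return successes >= successful_samples

print(is_healthy([200, 200, 500, 200]))   # True  (3 of 4 probes OK)
print(is_healthy([200, 404, 500, 200]))   # False (2 of 4 probes OK)
```

Requiring multiple successful samples is what keeps one flaky probe response from flapping an origin in and out of rotation.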
So one thing you can do in Azure Front Door is assign custom domain names, and you can actually assign multiple custom domains within Azure Front Door. To show you the form, it's not too complicated: for DNS management you can use either Azure-managed DNS or another provider; you specify the DNS zone name for the custom domain; and you add that custom domain. For HTTPS you have a couple of options: an AFD-managed certificate, or bring your own certificate, and you can require TLS 1.0 or TLS 1.2. Of course 1.2 is newer, so that's what you should be using. There's really not much more to say other than that you can attach custom domains; very straightforward, very simple. Okay.
All right, we're taking a look here at Endpoint Manager, which provides an overview of the endpoints you've configured for your Azure Front Door. This is not to be confused with Microsoft Endpoint Manager, which is a security service specifically for managing devices registered to your company that are used outside the office; no relation to that product, just very similar naming. Here's an example of an endpoint as you'll see it in Azure Front Door. The endpoint really means the entry point, and that's going to be the domain name: if you do not have a custom domain, Azure provides you one ending in azurefd.net (it might vary based on capacity and things like that). The flow goes from domains, to the origin group, to the routes, to security. Hopefully that is pretty darn clear. Endpoint Manager lists how many instances of each element are created within an endpoint, and the associated status of each element is also displayed. So there you go.
Let's take a look at routes for Azure Front Door. A route maps domains and matching path patterns to a specific origin group. The idea is that in the form we define which domain we want to use for this route; the patterns we're matching on, where a wildcard matches everything; whether the route should redirect HTTP to HTTPS, because if you're receiving HTTP you probably want to go there; and the origin group to associate it with. You can also provide an origin path and define the forwarding protocol. Routes can additionally have caching and compression applied, so this is where you'd start doing that, and rule sets can be associated to routes to apply intelligent routing; that's where you get that routing association. There you go.
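A toy illustration of what "a route maps domains and path patterns to an origin group" means in code. Front Door's real matching is more involved; this sketch just tries the most specific (longest) pattern first, using stdlib glob matching, and all names are illustrative.

```python
from fnmatch import fnmatch

def match_route(routes, host, path):
    """routes: list of (domain, pattern, origin_group) tuples.
    Try longer (more specific) patterns first, so '/images/*'
    beats the catch-all '/*'."""
    for domain, pattern, group in sorted(routes, key=lambda r: len(r[1]),
                                         reverse=True):
        if domain == host and fnmatch(path, pattern):
            return group
    return None  # no route matched this host/path

routes = [
    ("myendpoint.azurefd.net", "/*", "default-origin-group"),
    ("myendpoint.azurefd.net", "/images/*", "image-origin-group"),
]
print(match_route(routes, "myendpoint.azurefd.net", "/images/logo.png"))
print(match_route(routes, "myendpoint.azurefd.net", "/index.html"))
```

The image request lands on the image origin group while everything else falls through to the wildcard route, which is the same shape as splitting static assets off to a dedicated backend.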
Let's take a look at traffic acceleration for Azure Front Door, sometimes also known as global acceleration depending on your cloud provider. The idea here is that, without requiring any modifications to your application code, Azure Front Door can speed up global delivery of your application. Azure achieves this traffic acceleration by directing traffic to the nearest edge location to on-ramp it into the Azure network. So somebody who wants to connect hits that edge location (again, an edge location is basically a data center, a point of presence, hardware on the edge of a network), and once they're in, everything from there to the origin, to the backend, goes through the internal Azure network, traveling at accelerated speed while also taking the most direct path. Think of it as an expressway, and think of the edge locations as on-ramps. So there you go.
Let's take a look here at rule sets for Azure Front Door. These allow you to customize how HTTP requests get handled at the edge and provide more controlled behavior for your web application. The idea is that you have a rule set, and within it you define rules: you name a rule, provide a condition, and then an action, and there are a lot of options for each. For conditions we have a lot of things: device type, HTTP version, cookies, query strings, and a bunch of request attributes such as the body, file name, file extension, and so on; lots of conditions are available to us. For operators we have equals, contains, less than, greater than, and a whole host of others, even including regular expressions. For actions is where it gets really interesting: you can do cache expiration, where the action expires the cache; a cache key query string, where you cache every unique URL, which is just more fine-grained caching; you can modify the request headers or the response headers, which might be useful; you can redirect URLs, sending different types of redirects such as permanent (301/308) or temporary (302/307); you can rewrite URLs, where you provide a source pattern and a destination; and you can override the origin group. So there's a lot of stuff you can do within those rule sets.
Hey, this is Andrew Brown, and in this follow-along we're going to be utilizing Azure Front Door. Before we do that, we're going to need ourselves a storage
account to set up some static website hosting. So what I want you to do is go to Storage accounts. There is a newer static web app hosting offering we could use, but we're going to do it the old-school way, because it's always great to learn a few different ways to do things in Azure, and there's probably another follow-along where we use that static site service. We'll create a new resource group; I'm going to call this one my-azure-front-door, and say OK. From here we have to name the storage account; we'll say staticstorage8888. If you can't get the four eights because I'm using them, just pick a different number sequence, because these names are treated like fully qualified domain names. Choose a US region so we're all doing the same thing. For performance we're going to stick with Standard, and it doesn't matter whether it's geo-redundant; we're not doing anything fancy there. So go ahead and hit Review + create, give it a moment, and hit Create. We'll wait for that deployment to finish, and from there we'll enable static website hosting. It doesn't take too long to deploy, so we'll just give it
a moment. While that's going, we do need to create ourselves an index.html file, so you're going to need some kind of editor; I'm just opening Visual Studio Code on my computer and creating a new file. What we need is a really basic index.html; you can find examples anywhere online, and this one is going to say hello mars. Very, very simple. I'm going to save it to my desktop as index.html. Now we'll go to the resource and open Static website, a blade on the left-hand side, so we can do our static website setup. We'll set the index document name to index.html; I think we have to set that for it to work, I can't remember exactly. We get a primary endpoint; this matters later, so we'll come back to it. Then we'll go over to our Containers. You'd think we'd have to create a new container, but actually we don't, because when we turned on static website hosting it already created a container called $web for us to upload our files to. So go to Upload, select that index.html file, and upload it. We may need to change the access level for this container; I believe it should be Blob, meaning anonymous read access for blobs only, where container data is not available anonymously. I'm just double-checking against my written instructions so you can see the settings if you're following them, and yes, Blob access should be okay. So now we want to test that our page is working: go back to Static website on the left-hand side, grab that primary endpoint, and paste it into the browser. And hello mars is working. But this is served straight from static storage; it isn't in front of, or rather
behind, Azure Front Door, which is what we'll want to do next. So close a couple of these tabs, and type Azure Front Door into the search; "front door" should be enough. We have a whole bunch of options; we'll just hit Create. (Azure Front Door is actually a front door we use for our own platform.) There's a Quick create and a Custom create; I always go Custom, I don't think I've ever done Quick create, I just have more trust in Custom. From here we'll choose Front Door, and East US is fine; I'm just wondering where our storage account is, because we should always try to keep them in the same place. So let's check the storage account; I don't think it will matter, but I'm going to double-check. This one is in East US, so we're okay; just make sure they match so we have fewer problems. I'm going to call this one my-azure-front-door. Now, there is more functionality in Premium, but Standard is fine for us. Next is Secrets; this is where you'd add a certificate, like if you wanted to bring your own, but we're not worried about that. Then we have to add a new endpoint, so we'll say my-endpoint. There are a lot of small steps in here, so this should be fun; hopefully we don't configure anything wrong. Then we need to add a route, say my-route, and it's going to use the default domain; that's totally fine, we don't have to do anything, it will just match on /*. We do need to create an origin group, so we'll go here and create my-origin-group, and from there we need to add an origin (yes, it is very squirrelly going through all this): my-origin, with the origin type set to Storage (static website), and from here we choose the right one, the account we called staticstorage8888. This is all fine, so go ahead and hit Add. Down below there are some load balancing settings; we don't care about any of that, and you can see the status is enabled, so all of this should be okay. The protocol for the health probe should be HTTPS; actually, let me double-check, because I remember this is where I ran into some trouble. I thought I read somewhere that static storage doesn't use HTTPS, but clearly it does, because the link we visited earlier was HTTPS; I might have the screenshots wrong, but I've corrected them in the actual instructions on my site. We'll go ahead and save that with HTTPS, and leave the forwarding protocol as match request, which is totally fine; the origin path is fine too, so go ahead and add it. Then we'll go Review + create, give it a moment, and Create. All right, so after waiting a few
minutes, like two or three minutes, it looks like our Azure Front Door is set up, so we'll go to the resource. I'm just hoping this works; sometimes you have to play around with the settings. If you see the endpoint hostname, go ahead and grab it, paste it in, and... we get a 404, so something's not working quite right, and we'll have to do some debugging, which I was hoping we didn't have to do. So we go to Front Door manager; this is the same setup we had before, and there are different ways to get to it, but we'll just go through and debug it. We do have my-route and my-origin-group, so we can click into our origin group, expand it, and see our route. There's something that is not correct, and that's what we have to figure out. I'm just going to double-check my instructions, because this was a bit tricky to figure out, and it really came down to those protocols; so let me keep checking. I think the health probe is working correctly: if we go into the origin, it shows green, and it wouldn't show green if it weren't working. So, going back up a step, you know what, maybe we should just wait a little bit, because sometimes it takes time to propagate, and I don't 100% trust that it's not working, because I feel like we configured it exactly right. I'll open a new tab; it still says 404, and my origin group looks fine. So what I'm going to do is go back to our storage account, because I just want to double-check what that static website endpoint string was; I'm pretty sure it was HTTPS. It is HTTPS, so there's no reason this should not work. This is the debugging I was trying to spare you, and it's very common; it's not just Azure, anything that has a CDN can be difficult to figure out sometimes. We'll go look at that route and carefully check what we have here: the patterns to match, that's fine; it's the correct endpoint; both protocols are accepted, that's totally fine; redirect to HTTPS, totally fine; the match doesn't matter. So it should just work. The only thing we didn't do was enable caching, which I think wouldn't hurt to do, because it is a CDN and we don't take advantage of it if we don't turn it on; so we'll do that and say ignore query string. That's not going to fix this problem if the routing is messed up; again, I'm just hoping it has to do with propagation. I'll go back to the overview, open the endpoint again, making sure it's HTTPS, and now we get "we're working to restore all services as soon as possible." What do you mean, our services aren't available right now? Let's go look at the Azure status page and take a look at Front Door. It's a non-regional service, so there's no entry beside US East; if it's green, it's green, and it is, so Azure itself is fine. I'm going to play around with this for a little... oh, a 404 from the $web container. This is better, right? It's just saying it can't find the content, which makes sense given the path that was there. So we'll clear that out, and... now it's loading. So I think the issue is that there was no issue; it was just propagating to all the servers, and it took some time. I think that's what really threw me for a loop when I was originally doing this, so just give it some patience and it will work eventually. We are all done here, so we can go ahead and clean this up: I'm just looking for the resource group, and we'll go ahead and delete it. And there you go, that's Azure Front Door; I'll see you in the next one. Okay.
Let's take a look here at Application Insights. This is an application performance management (APM) service, and it's a sub-service of Azure Monitor. Let's talk about what APMs are: they do monitoring and management of performance and availability for software apps. An APM strives to detect and diagnose complex application performance problems to maintain an expected level of performance. So why use Application Insights? It automatically detects performance anomalies; it includes powerful analytics tools to help you diagnose issues and understand what users do with your app; it's designed to help you continuously improve performance and usability; and it works for apps in .NET, Node.js, Java, and Python, hosted on-premises, hybrid, or in any public cloud. I know it also works with languages that are not part of the officially supported list; there's one for Ruby, for example. The thing is, Azure only provides official support for a handful of languages, but you might still find community libraries for others. It integrates with your DevOps processes, and it can monitor and analyze telemetry from mobile apps by integrating with Visual Studio App Center. If you're running an app, you definitely want to have an APM installed; if you've ever used Datadog, Skylight, or New Relic, that's what APMs are, or what those providers offer. Let's take a closer look at an example application with a frontend, a backend, and workers, just to represent how you can instrument your applications. When we say instrument, it just means installing a piece of code that runs alongside your app and sends data back to Application Insights. When you instrument, you're installing the instrumentation package (the SDK), but in some cases you can just turn it on where it's supported within Azure services, so you don't necessarily have to install anything; you might just press a button. There are many ways to view your telemetry data: the agents send that information to Application Insights, and then you can leverage it in alerts, Power BI, Visual Studio, the REST API, continuous export, and a lot of other services that can ingest Application Insights data. Apps can be instrumented from anywhere; if you're running on AWS, you can install it on your servers there. When you set up Application Insights monitoring for your web app, you create an Application Insights resource in Microsoft Azure; it's a physical resource, and you open it in the Azure portal in order to see and analyze the telemetry collected from your app. The resource is identified by the instrumentation key, also known as the iKey. I've got a big list here just to tell you everything it can do. What does Application Insights monitor? Request rates, response times, and failure rates; dependency rates, response times, and failure rates; exceptions; page views and load performance; AJAX calls; user and session counts; performance counters; host diagnostics; diagnostic trace logs; and custom events and metrics. So there you go, that's a big list. And on the right-hand side, where can I see my telemetry? We saw a short list, but the big list is: smart detection and manual alerts, application map, profiler, usage analysis, diagnostic search (for instance data), metrics explorer (for aggregated data), dashboards, live metrics stream, analytics, Visual Studio, snapshot debugger, Power BI, the REST API, and continuous export. So you can see that it can collect a lot and you can use it in a lot of places; definitely, definitely install it if you're running a web application.
Let's quickly talk about OpenTelemetry. OpenTelemetry, also known as OTel, is a collection of open-source tools, APIs, and SDKs used to instrument, generate, collect, and export telemetry data. OTel standardizes the way telemetry data — metrics, logs, and traces — is generated and collected. It uses a wire protocol, which refers to a way of getting data from point to point. Application Insights supports OTel as an alternative to the Application Insights SDK for instrumentation. OpenTelemetry is very popular in the cloud-native and Kubernetes space, and it's probably the future of metrics — it's a standard, and we're seeing adoption not just by Azure but by AWS and Google Cloud. So it's worth knowing about this alternative method. It may or may not show up on the exam, but I definitely want you to know what OpenTelemetry is, okay?
Let's talk about instrumentation for Application Insights, because if you don't do this, you're not going to get any data into Application Insights. You instrument your application by adding the Azure Application Insights SDK and implementing traces. It supports a variety of languages: .NET, Java, Python, Node.js, JavaScript — no Ruby; I'd really like Ruby support, please. So you run npm install applicationinsights --save — that's the JavaScript example here. I'm requiring the package, we do some setup, and then with the client you can see we track an event, track an exception, track a metric, track a trace, track a dependency, and track requests. So there's a variety of things we can do. There is auto-instrumentation too — we'll talk about that soon — but generally this is what instrumentation looks like, okay?
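As a rough sketch of the tracking calls just described — assuming the `applicationinsights` npm package is installed and a connection string is available in the environment — the client API looks something like this (the event names and values here are purely illustrative):

```javascript
// Sketch of manual instrumentation with the Application Insights
// Node.js SDK. Assumes `npm install applicationinsights --save` and
// APPLICATIONINSIGHTS_CONNECTION_STRING set in the environment.
const appInsights = require("applicationinsights");
appInsights.setup().start(); // setup() with no args reads the env var

const client = appInsights.defaultClient;
client.trackEvent({ name: "my custom event", properties: { customProperty: "value" } });
client.trackException({ exception: new Error("handled exception example") });
client.trackMetric({ name: "custom metric", value: 3 });
client.trackTrace({ message: "trace message" });
client.trackDependency({ target: "http://dbname", name: "select customers",
  data: "SELECT * FROM Customers", duration: 231, resultCode: 0,
  success: true, dependencyTypeName: "ZSQL" });
```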
So wouldn't it be nice if we didn't have to instrument our app — if it would just work? That's where auto-instrumentation comes in. It allows you to enable application monitoring in Application Insights without changing your code. It's supported for a variety of services, but support varies, so you'll have to come back and check whether this is even still true — I did just grab this table, and Azure moves really fast. Let's take a look at what I'm seeing right now. Across the top we have .NET, .NET Core, Java, Node.js, and Python, and there's a term "OnBD," which means "on by default." Support also varies by service. For Azure App Service there's pretty good support across the board, except for Python, which is surprising, and on Linux it's not there for .NET — it just doesn't let you have it. For Azure Functions, we only have Java in public preview. For Azure Spring Cloud it's only Java again, which makes sense because Spring is a Java thing — the other languages will never be supported there, so they should really say N/A. For Azure Kubernetes Service it's only .NET Core, and maybe through the agent. Windows virtual machines and on-premises Windows virtual machines are the same here, and for the standalone agent Java is generally available. So not a lot of support, but there is some — if you're in the .NET world or the Java world, it looks like you're going to be in really good shape. There you go.

Hey, this is Andrew Brown, and in this follow-along what we're going to do is learn about using Application Insights.
Application Insights gives us visibility into how our application is running — things like distributed tracing — so we're going to deploy some kind of workload and then apply Application Insights to it. Go to App Services and deploy a new service — just type "app services" into the search bar at the top. We'll create ourselves a new App Service, and I'm going to make a new resource group so it's easy to clean up later: my-app-insights. From there we need a web app name, so we'll say my-app-insights-8888 — if you can't get this name, just change the values to something that works for you, because these are fully qualified domain names; it's like registering a domain name, so make sure yours is available.

We're going to publish code here today. For the stack we'll choose Node to make our lives super easy — I'm choosing 16 LTS; if there's a newer version it should work fine, because Node doesn't change that much between versions, but it depends how far in the future you are. For the region, Central US is fine — the portal always chooses one at random; I'm in Canada, but I'm going to stay in Central US because I know the things I'll be using are available there. The SKU and size greatly affect the cost, so we'll click into here, because we really do not want to pay much at all. We'll give it a moment to load — it usually doesn't take long, but right now it certainly does. I'm going to make my way over to Dev/Test and choose the cheapest option, which is F1. You can see there's not a whole lot to it — just memory and storage; we don't get a custom domain, we don't get manual scaling, but we don't need that stuff — we need it nice and simple. So we'll choose Free F1. That sounds good to me; I like the word "free."

We'll go to the next step, which is deployment. I don't think anything has to be changed under deployment, but we'll click it anyway — not sure why Azure is a bit slow today, but that's just how it's going, so we'll give it a moment to think and we will wait. Wow, it is really, really slow right now. Okay, so for deployment you could hook up GitHub Actions and things like that, but I don't think we need any of that. I'm going to make my way over to — oh, sorry, not Networking, Monitoring, because we don't need networking — and we want to enable Application Insights. This is going to set some things up for us — it will create an Application Insights workspace (or namespace, whatever they call it), so we don't have to worry about that. We'll hit Review + Create, then Create, and now we just need to wait for it to deploy, so I'll see you back here when that's done.

Okay, after a very short wait — maybe a
minute or two — that has completed, so we'll go to the resource. I want to show you, over in the Configuration tab, the environment variables that are going to get passed to our application. If you go here, these are the environment variables that get passed along, and notice we have ones for the Application Insights connection string and instrumentation key. The reason this is important is that we don't have to set them in our environment — since we're using App Service and we enabled Application Insights, it's already going to set those values, and the great thing about the SDK is that it will pick them up, so we don't have to set them explicitly. At least I don't think so — if we do set them, we'll use environment variables; I haven't decided yet. For Application Insights it set us up a — I don't think they call it a workspace; name, namespace, I don't really care what it's called — but the idea is that we have an environment set up to gather information. So go back here and notice down below it says the agent is enabled, which indicates it's working as expected. That's a good message to see — not that we'd ever see any other message there.

Now that we know everything is deployed, or at least that our environment is ready, let's create a new application. I'm going to make my way over to GitHub and create ourselves a new repository. I don't know if I have an old one sticking around — hopefully not, but we'll see. I'm going to name it my-app-insights — and I already have one there, so I'm going to open a new tab, go to the old repository, and rename it so it's out of the way and you can follow along exactly the same way as me. Under Settings I'll go down and add an "-old" suffix, rename it, and close that out. Back here, this is fine now — I know it says the name is taken, but it isn't anymore. We're going to add a README file, and under .gitignore we'll add the Node template — I might have done this after the fact, so the steps may be slightly out of order here, but that should sort it out. And we'll go ahead and create this repository.
Now, I want you to get Gitpod set up. Gitpod is a really great service for cloud developer environments, and it has a generous free tier, so there's no reason not to sign up — really, this is the future of cloud development, so make sure to get a Gitpod account. You don't strictly have to, though, because when you launch an environment it either runs temporarily or you can connect your account and persist the data. If you install the Gitpod Chrome extension you'll get a button on the repository page, but you don't need the extension — all you have to do is put the Gitpod prefix in front of the repository address. This is the current repository link, and all that button does is prepend that part for you, okay? This is going to open up a Gitpod environment — it launches extremely fast — and we're going to create a new Node.js application and deploy it to App Service. We'll give it a moment; it doesn't take too long. Here we go. We're going to use npx with the Express generator to generate a new Express.js application, so we'll type that in, say yes, and it's going to create a skeleton app — just say yes all the way down. We'll get a bunch of files in here, and then we'll do our npm install — you can type npm install, or npm i if you want it nice and short. We'll give it a moment. Okay — then we'll make sure this works.
We'll type npm start, which prompts us to open the browser, so we'll open it — and there's Express, so the application is working at this link up here. But it's not instrumented to send any data, so that's what we need to do next. Kill the server — Ctrl+C on my keyboard — and type npm install applicationinsights --save, which installs the Application Insights dependency. If I go over to package.json, you can see it's now listed as a dependency. So now we need to configure it.
We'll go over to our app.js file, and we've got to insert the setup somewhere — I'm going to go all the way to the top; that's probably a good place for it. We'll write let — I like how the generated code says var everywhere; var is really old — appInsights = require("applicationinsights") (double quotes or single, it doesn't matter; it's not going to hurt either way). That defines our variable. The next thing is the setup, so we'll say appInsights.setup() and pass our connection string — that's where it's going to go. Then we need a bunch of other options, and I'm not going to type them all out because there are a lot of them, so let's search for "application insights instrumentation node.js" — I'm sure the documentation will help us out. I'm scrolling for the usual configuration... here it is; that's what I've been typing this entire time, and probably where I got it originally. We'll paste that in, taking out the first line so there are no spelling mistakes. So we have auto dependency correlation, auto collect requests, a bunch of interesting stuff — it's going to do distributed tracing. We don't need the semicolon, so we'll take it off for fun, and I'll save that.

Now the next thing we need is the connection string, to make sure this application works — it's going to go here. Make your way back over to the portal tab; the connection string is in the Configuration blade we were in earlier, so give it a moment to load. There we go — we want the connection string, so we'll grab it. I believe you used to be able to use the instrumentation key, but now the connection string is what's preferred. I clicked into the field to make copying the value a bit easier, and what I'm going to do is really dirty: I'm just going to paste it in directly. We'll do this in a few steps, just to make sure it's working correctly before we switch over to environment variables.

One other thing we need to do, if we actually want to see any data, is set send live metrics to true. If you're instrumenting in production you might not want live metrics — I don't know if it costs more — but for our development purposes it's not a big deal to turn it on. So that is now enabled. Our .gitignore should already exclude a bunch of stuff like node_modules, so we're not committing that. I'm going to commit this now — it does have the connection string in it, which I'm not too worried about because we're going to delete it later on. We'll just say "save my changes" — but I am pointing out to you that we're committing this string, which is bad practice, because if somebody got it they could mess with this. Again, we're going to tear all this down, so it's not that big a deal, and we should be okay. Now that this is installed, we'll do an npm start to start up our application and open it in the browser again. Here's our application — now we'll go back to Application Insights
— or rather, not Application Insights directly, but into our App Service, our actual application. On the left-hand side there's an Application Insights entry, and from there we can click "View Application Insights data" to go to the workspace (or namespace, whatever they want to call it), which is how we're going to be able to see some information. Live Metrics, on the left-hand side, shows us data in real time — and now it says it's connected to your app. So we'll go back to the app and refresh; the app connects and we should get some data. There we go — these are the requests we're making right here. I hit enter, enter, enter, enter, and back here are those requests coming in; it's giving a 304 for the CSS. So we know it's working, because we can see information there — pretty darn straightforward.
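Putting the steps above together, the instrumented top of app.js looks roughly like this at this stage — the connection string is a placeholder, and the option list is the usual configuration from the SDK documentation:

```javascript
// Top of app.js — sketch of the Application Insights setup.
// Assumes `npm install applicationinsights --save`; the connection
// string below is a placeholder, not a real value.
let appInsights = require("applicationinsights");
appInsights
  .setup("<your-connection-string>")
  .setAutoDependencyCorrelation(true)
  .setAutoCollectRequests(true)
  .setAutoCollectPerformance(true, true)
  .setAutoCollectExceptions(true)
  .setAutoCollectDependencies(true)
  .setAutoCollectConsole(true)
  .setUseDiskRetryCaching(true)
  .setSendLiveMetrics(true) // needed to see data in Live Metrics
  .start();
```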
Now let's go back to our application and set this up properly, because we really shouldn't be hard-coding the string like this — it's pretty dirty. I'm going to delete it and type process.env followed by the name the variable is supposed to have in our application. I'll click back to our actual application, go to Configuration, and from there grab the setting's name — I clicked into it because I just don't want to type it wrong — and paste that in. This is what we're going to set as our environment variable (I did Ctrl+C to kill the server there). I'm using gp env — gp stands for Gitpod; it's a way of setting environment variables — and always use double quotations, especially with connection strings, so that part of the string does not get cut off; otherwise the shell will cut it off at the first special character and you'll end up with only a fragment, and that's not enough, right? So I've copied the value, I'll paste it in and hit enter, and that sets the key.

Now if I run env — this prints all the environment variables — piped through grep (the pipe is just the vertical bar character), and type any part of the name, notice that it's not set yet. Just because we ran gp env does not mean it's set in the current shell; what you have to do is restart the environment. But we can set it temporarily with export — that's how you set a variable in bash: you type export, give it the name, then equals and the value in double quotations. That will make the environment work, but the problem is that we'd lose a plain export on the next restart of the workspace — gp env is what persists it. So this will definitely work, but let's just restart the environment anyway. We have process.env up in app.js, and we'll import that as well — I'll say let process = require("process"). process is just a way of getting access to the process — the application that's running — to read the environment variables that are set, which get passed along to the application by App Service.
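The terminal side of this, roughly — the connection string value is a placeholder; `gp env` persists the variable for future Gitpod workspace starts, while `export` only sets it for the current shell:

```shell
# Persist the variable for future workspace starts (Gitpod-specific):
gp env APPLICATIONINSIGHTS_CONNECTION_STRING="<your-connection-string>"

# Set it for the current shell session (plain bash):
export APPLICATIONINSIGHTS_CONNECTION_STRING="<your-connection-string>"

# Verify it is set:
env | grep -i connection
```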
So that's good, and I think we're in good shape. The only other thing is I want to generate a .gitpod.yml file, so I'll set a few things up — this will save us time when we relaunch the environment. I'm using a pipe at the end of the command line, which allows multi-line values even though we're only doing a single line — we could probably omit the pipe, but I'm putting it in anyway. We'll indent — the syntax coloring usually tells us when something's wrong. This app starts on port 3000, and I want it to open in the browser, not as a preview — that opens in a little window, which is kind of gross. This looks good; I think I accidentally removed a line, because we need tasks up at the top. So now we have our .gitpod.yml file, and we'll go ahead and add all these files
here — "instrument," say; I think that's everything we need to do here. We'll commit that and sync our changes, and I'll do a git push just to double-check that everything was synced and committed. If you go back to our actual GitHub repository — clicking the context link just brings us back to it — we can see the .gitpod.yml file, so we know our changes have been committed.
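For reference, the .gitpod.yml we just committed looks roughly like this — a sketch, with the exact task commands assumed from what the follow-along describes:

```yaml
# .gitpod.yml (sketch) — start the app on launch and open
# port 3000 in a full browser tab rather than the preview pane.
tasks:
  - init: npm install
    command: |
      npm start
ports:
  - port: 3000
    onOpen: open-browser
```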
The highlighting looks fine — this is green, this kind of looks blue — so hopefully that works. Anyway, what we'll do is close this tab and reopen the repo in Gitpod. This will launch a new workspace — if you still have the old tab open it might ask you to explicitly press a button called "New Workspace," but generally it just opens a new one. It's setting up a completely new container for us, so give it a moment; it doesn't take too long. We'll wait for the terminal to open... okay, notice it ran the server and did the npm install — it did everything for us. I'm going to open a new terminal on the right-hand side with bash — whoops, I did not mean to split it, and I don't know how to unsplit, so I'll just drag it around; that doesn't matter, yours will look a little different than mine. All I want to do is run env piped to grep for "conn" — two n's — because I want to see that the connection string is set. There it is: the Application Insights connection string. I'll close that terminal, since I'm getting a bit confused with it open.

Our app should still be running — on the left-hand side under Remote Explorer it didn't give us the prompt; notice the little red icon saying the pop-up was blocked, so choose "always allow" rather than "continue blocking" — and I'll click the little globe button to open it. It's still working, so let's double-check that our instrumentation works with the new connection string being passed through as an environment variable. On the left-hand side in the portal, go back to Application Insights, and we'll go back to viewing the data; on the left-hand side go to Live Metrics. It says it needs to reconnect, so we'll hit refresh — if it connects, it must be working. Refresh, refresh, refresh, go back... the data is coming in, so it is working. We're in good shape here — our application is configured correctly, but it's not actually deployed
— we've only been passing in our own environment variables — so let's actually get it deployed to App Service. Click back to our App Service — yep, this is it, it says App Service, that's good. On the left-hand side we want the Deployment Center. We'll choose Code as the source and GitHub, and from here we need to authenticate — I'll use my personal account; that's just the way I get to it. I need to select an organization, which is ExamPro; a repository, which is my-app-insights, the new one; and the main branch. It knows what the runtime is, and we'll go ahead and save. As soon as we do, it should automatically start deploying — it's using GitHub Actions as the build provider, so if we go back to the repository and open Actions, we can see one queued up. This will run on its own; we don't need to do anything. So all we're doing is waiting for this to deploy, and once it's done our application should be deployed to our URL. Then we'll check whether it works — and if it works, we're in good shape and we've done the basics of Application Insights. I'll be back here in a little bit.

Okay, all right, it's done — I was waiting here for quite a while, but I think it finished in a few minutes; you just have to hit refresh, so don't be afraid to go up here, hit enter, and see if it's deployed. Now that it says it's deployed — the Gitpod URL is no good to us anymore — we'll go back to our application, all the way to the top to the Overview tab (or blade, I should say), and hit Browse to see if it's deployed. We're hoping to see the Express page. The first load might be a bit slow, so give it a moment to think — it sure is thinking hard — so we'll close it and reopen it. We are on the free tier, so it's probably spinning up the instance — spinning up whatever the underlying environment is — to get it running. It's running here now, and the next question is: is it tracking? We'll go over to Application Insights, back to viewing our insight data, and all the way down to Live Metrics. It says it's not connected, so we'll hit refresh, give it a moment to see if it finds any data, hit enter — there we go, we're getting a couple of requests in. So Application Insights is working and the app is deployed — we're in pretty good shape here. We're going to keep this environment around because we need it for a few more follow-alongs, but if you want you can tear it down — you'd just have to do all the setup again, though you can see it doesn't take too darn long. There you go.

Hey, this is Andrew Brown, and we're going
to take a look at sampling with Application Insights. This is a very short follow-along, but very useful for you to know. We have this application, and I want you to open it up in Gitpod — you should know by now how to open things in Gitpod — and we'll give it a moment while we configure it to do sampling. Sampling is when you don't send every single trace, or every single request's information, to Application Insights. The reason you'd want to do this is basically to save money — if all your data goes there, it's just too much information, and that cost stacks up a lot faster. An application might send only 50 percent of requests, or only 20 percent — it's up to you to decide how much data you need for something that's accurate, but you definitely don't need all of it.

So I'm taking the .start() off the end of the setup chain and calling appInsights.start() separately, so it's set here. Then I'll add appInsights.defaultClient.config.samplingPercentage = 50 — I think this should go before start; in my code example I may have it in the wrong order, and I don't think it even matters, but it might. So we're saying: only sample 50 percent of the time. I'm just going to double-check I spelled it right: appInsights.defaultClient.config.samplingPercentage = 50.

Now with that set — I'm not going to test it locally in Gitpod — we'll go back to our application, into the Application Insights view, and into Live Metrics, and over here we'll stop our application and start it back up. Okay, this should connect — I don't know why it says it's not supported; it's definitely supported. We'll hit the page one, two, three, four times, and going back, notice that I hit it four times but I only have two requests. That's because we're only getting 50 percent of them — sampling is cutting out half the requests. That's all I wanted to show you, because this is a very basic configuration that you should know for Application Insights. We're going to keep the environment around because we have a few more things to do — but there you go.
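For intuition, fixed-rate sampling like `appInsights.defaultClient.config.samplingPercentage = 50` keeps a deterministic fraction of telemetry, deciding per operation so that related items in a distributed trace are kept or dropped together. Here's a toy sketch of that idea — not the SDK's actual algorithm; the hash function is made up purely for illustration:

```javascript
// Toy model of fixed-rate sampling: hash the operation id and keep
// the item only if the hash falls below the sampling percentage.
// (Illustrative only — not the real Application Insights algorithm.)
function shouldSample(operationId, samplingPercentage) {
  let hash = 0;
  for (const ch of operationId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple rolling hash
  }
  return hash % 100 < samplingPercentage;
}

// The same operation id always gets the same decision, so a whole
// trace is either fully kept or fully dropped.
const ids = Array.from({ length: 1000 }, (_, i) => `op-${i}`);
const kept = ids.filter((id) => shouldSample(id, 50)).length;
console.log(`kept ${kept} of 1000`); // roughly half
```

The key property is determinism: re-running the filter yields the same kept set, unlike random per-item sampling.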
Hey, this is Andrew Brown, and we're going to look at usage analysis with Application Insights. Usage analysis gives you richer information about how your application is used. I'm here in Application Insights for the app we've been using; if you go to the left-hand side and scroll down, you'll see the Usage section — users, sessions, all this interesting stuff — and it lets us instantly create cohorts, which are extremely useful for visualizing information. If you've never seen a cohort before — a user cohort, say; I don't think it matters which kind we look at — I just want to show you an example of one. This is a cohort, and the idea is that with an application, you want to see whether users are coming back every single day: this person was here on day one, and the same person came back — or 23 percent of people came back — the second day, 18 percent came back the third day, and so on. It gives you a kind of map of your user behavior — very important, very useful for your application — and Application Insights lets you collect that information.

Now, what we're going to do here is instrument the client side, because we only instrumented the server side when we installed the SDK into our back-end server code; we never did the front end. So that's the first thing we need to do, and I just need the code snippet for it. I'm going to go back to our repository because we'll need that open — my-app-insights — and open it in Gitpod. While that's loading I'll search for "application insights instrumentation node.js," and we should easily find the code — I want the client-side version, though, and there are all sorts of pages here, so it can take a second; let's search "usage analysis with application insights." Here's the page, and that's the code we want. I'll copy that snippet, and it's going to go into the front end of our website — somewhere in the index view, which in this setup is a Jade file. It's just opening the browser for us here, so I'll go to index.jade — Jade is just a syntax language used by this configuration of Express.js. I'm going to paste this block in, but we have to reformat it a bit, because Jade won't accept raw HTML: at the top we type script — that's how this language expresses script tags for JavaScript — then in parentheses, type equals, in quotations, "text/javascript," and then select and indent the pasted code under it. I'm just looking at where we should put it — they probably want it in the head, so we'll put it right here, and yeah, that's fine; that looks good to me. We're also going to need to create some kind of cohort, so let's commit this first — "add client side tracking" — and we'll stop our app and start it back up.
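In Jade/Pug terms, the reformatted client-side snippet ends up shaped something like this — a sketch; the actual JavaScript loader body comes from the Microsoft docs page, with your own connection string substituted in:

```pug
//- index.jade — client-side Application Insights (abridged sketch)
script(type='text/javascript').
  // ...paste the JavaScript SDK loader snippet from the docs here,
  // indented under the script tag, with your connection string...
```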
Okay, and back over here I just want to show you where you can set things up. We're not going to be able to see a real cohort, because we need multiple days to collect information, but let's play around as if we do have the data. Over here on the left-hand side we'll go to Sessions, and it should tell us how many sessions we have. There was no information showing before, but now we're seeing two. I'm not sure if it's counting both the server side and the client side, but we are getting data, so that's good. If we go up here to create a cohort, we have a template gallery, so we could choose from a variety of different ones; we might try a blank sessions cohort. Sessions are people connected to the application; users are identified users. We can add parameters to filter things down, but we don't have to do anything. We can also run the query to see what information we get back, but we'll just save this cohort. We'll name it "my cohort", or whatever you want; it doesn't matter, because we're not going to see anything interesting anyway. So this is our cohort, but as you can see there's nothing interesting. If we want to see something that actually looks like a cohort, what we can do is make our way over to Workbooks. I think... yeah, over here.
I'm not sure if this is the same thing in a different place, but if you go down below they have usage reports like active users and things like that. I'm looking for retention, so maybe cohort analysis. And down below you can start to see a table of what that looks like. I find the Workbooks a lot more useful than the usage tabs, but I just want you to know about this usage analysis, because it's super useful that Application Insights has that functionality. Hopefully that's good enough, but that's all we're doing here, so I'm just going to go home, close these tabs, and I'll see you in the next one.

Hey, this is Andrew Brown. In this follow-along we're going to do a bit more with Application Insights by putting in custom events, which is very useful for
tracking. So what I want you to do is open up Application Insights for the project we've been working with. We'll also go back to our GitHub repository and open it up in Gitpod; you should know how to open Gitpod by now. We'll give it a moment to load, and then we'll enter some code. We have server-side and client-side tracking; we did the client-side tracking back in the usage section. If we expand this here, we're going to want to put something in the index.jade file, and what we'll do is add a new script tag. That's going to be script; this is the Jade language, and I don't care if you learn it, you just have to follow along for this purpose, because coding is not the lesson here. We have text/javascript, which lets us write some JavaScript there. I don't know why it's saying tab when I type, which is not very useful, but I'll just do space, space, and type appInsights.trackEvent; we'll autocomplete that, then parentheses, curlies, name, double quotations, client count. So every time this page loads, it's going to call trackEvent with this name, and that's going to count up. It's called client count so we know it's on the client side. So that's the index page
there. And so now we need to do the server side, and there are a few things to do here. We've got to go to app.js, and in there we need to set a variable that's going to get passed along. So we'll go down below and do app.set, parentheses, appInsights. This is specific to Express.js, so again, I don't care if you remember how this works; it's just a way to pass a variable that we're going to access in our routes. So we pass in appInsights.defaultClient, and I'm going to make sure we have that... yeah, it's up here, so we're passing it in, that's good. We'll go over to our routes, and in the route that renders our index page we need to add App Insights. We'll do appInsights, and we're going to grab it from that app variable we just set, so we do req.app.get, appInsights, and then we'll track it. It looks exactly the same as the other one; we're using JavaScript on both the front end and the back end, so that makes things a little bit easier. This one's going to be called, name, double quotations (or single quotations, doesn't matter), server count. Okay, so now we have it on the server side and the
client side. I'm going to go up here and commit my code as "client and server-side tracking". We'll commit that, and then we'll do an npm start to restart the application now that we have both sides of it. We do have a problem here on line eight of the index.jade file, so we'll go take a look at what it's complaining about. It does not like something... hmm, I don't see a problem. I'm just comparing against my code off screen here. It's this line it doesn't like, so why not? Oh, you know what, it's indentation; see where it says block content? It's supposed to be at this level here, so that's probably the problem. We'll stop it and start it again. We'll give it a second... it's still crying, it's a big old crybaby, but that's okay, we'll work through it. Now when I hit tab it's actually tabbing properly, which is what I wanted before; I don't know why it wasn't. It still doesn't like it, so since I do have the code off screen, I'm just going to copy-paste it; if you're on my platform, you'll have it too. Oh, there's a period on the end, of course; you can barely tell that was there. Okay, so we'll save that. Again, I don't care if you learn Jade templating; it's not a skill you have to remember. So that is there; I'm just going to keep hitting
enter, because I want to observe this custom event data. So we go back over here into App Insights, and we should be able to see this somewhere... I'm trying to find which blade it would be under. Huh, I didn't write it in my instructions, so we'll have to click around to find it. I think... well, it's event data, so it's probably under Events. Yeah, that's where it is; now I'm remembering. What we can do is filter this down to any custom event. If we scroll down, we'll go to View More Insights; I don't know why they hide that information behind a little button. Scrolling down, we have server count, and the server count is definitely working, but we don't know why we don't see client count. So what we'll do is go back to the browser, right-click, and inspect to see if there are any errors. There is an error: appInsights is not defined. And it should be defined in the head, because we have this script from before, right?
You know what, we didn't put the instrumentation key in, and that's not good. In the last follow-along we were supposed to do it, but it didn't really matter then because we didn't have any data to look at, which is why we didn't notice. We'll go back to our overview and grab the instrumentation key, then go back to our code and up to app.js. Actually, this is client-side, so it's a bit harder to pass it in as an environment variable; we're not even going to try, because it's just the instrumentation key and it doesn't matter. We'll go back to the layout file, and down below we'll just replace the key. Okay, we'll stop it, start it, hit refresh, and right-click to inspect and see if it's working now. It's still saying appInsights is not defined, which shouldn't be true, but there is another error here; we'll click into it. It's not showing me the line, which would be nice. Okay, so we'll go back here; there's something wrong, it doesn't like something... you know what, it's probably the missing period. There we go; I can't believe we never noticed that before. I'm very confident that is the fix.
So we'll go back, inspect, refresh... I already lost that, let's do inspect again. No errors here, that's good. We'll hit refresh a bunch of times, then go back to Application Insights and back to Events. There it is; I'll go down to any custom event, scroll down, and hit More Insights. It might just need some time to propagate, because we're not getting any errors now, so it must be working; if it didn't track, it would give us an error. Client count... so it is showing up, it is counting now. So that's how we do custom event tracking. There you go.
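The pattern we just wired up can be sketched like this. Note this is a hedged sketch: a tiny stand-in telemetry client is used so the snippet runs without the real Azure SDK, but the trackEvent call shape and the event names match what we typed in the video.

```javascript
// Sketch of the custom-event pattern from this follow-along. A stand-in
// telemetry client records events the way the App Insights SDK's
// defaultClient.trackEvent does (the real SDK would send them to Azure).
const events = [];
const telemetryClient = {
  trackEvent: (event) => events.push(event.name),
};

// app.js (Express pattern): stash the client on the app with app.set...
const app = new Map(); // stand-in for the Express app's set()/get()
app.set("appInsights", telemetryClient);

// routes/index.js: pull it back out via req.app.get() and track a hit.
function indexRoute(req) {
  const appInsights = req.app.get("appInsights");
  appInsights.trackEvent({ name: "server count" });
}
indexRoute({ app }); // simulate one page load

// Client-side, the call we put in index.jade looks the same:
//   appInsights.trackEvent({ name: "client count" });

console.log(events); // → [ 'server count' ]
```

The only real difference between the two sides is how you get hold of the client: the browser snippet exposes a global appInsights object, while on the server we thread the SDK's defaultClient through the Express app object.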
And for the most part, the only thing we didn't look at was the Application Map, so take a quick look here. This shows you the connections between components; here we have the web app to the client, and if we had set up other things, like a database, we would have something more interesting. That's a more advanced thing, and maybe I'll do it in another follow-along, but this one is done. So let's go ahead and clean this all up. You can delete this repository; it's no big deal, but if you're worried about the connection string, you can delete the whole repository. Since we are tearing everything down it's not going to matter, and I know we replaced it with an environment variable, but remember it's in the git history: if someone got access to your GitHub repository, they could go back in time and find it. What I'll do is go to Resource Groups, look for our App Insights one, hit Delete resource group, type the name to confirm, and delete. And there's our cleanup. There you go, that was Application Insights.
So, Azure Monitor is quite a beefy service; there's a lot going on in it. It's a comprehensive solution for collecting, analyzing, and acting on telemetry data from your cloud and on-premises environments. Here's an example of one of the things it can do, which is provide you a visual dashboard, but it can also create smart alerts, automate actions, do log monitoring, and a lot of other things. Many Azure services already send their data to Azure Monitor by default, so you can use it right away.

Now I just want to introduce you to the
concept of the pillars of observability. This isn't specific to Azure, but it's something you need to understand in DevOps, and it will help you contextualize the offerings of Azure Monitor and other cloud service providers. So what is observability? It's the ability to measure and understand how internal systems work in order to answer questions regarding performance, tolerance, security, and faults in a system and application. To obtain observability you need three things: metrics, logs, and traces, and you have to use them together; using them in isolation does not get you observability. Let's define those three things. The first is metrics: a number that is measured over a period of time. If we measure CPU usage and aggregate it over a period of time, we have an average CPU metric. Then you have logs: text files where each line contains event data about what happened at a certain time. And then you have traces: a history of a request as it travels through multiple apps and services, so we can pinpoint performance issues or failures. I like to joke that it looks kind of like the Triforce of observability once you've constructed it at the end.
Let's look at the anatomy of Azure Monitor, which is a little bit complex, but I'm sure we can work our way through it. The first thing is that we need sources of data: this is what can be sent into Azure Monitor. You'll probably want application data, operating-system data, data from Azure resources, data at your subscription level, data at your tenant level (which is associated with Active Directory), and custom sources. Once you get those into Azure Monitor, you have to store them somewhere: you're going to put them in Logs and Metrics, which are data stores within Azure Monitor that you're going to work with. Once you have that data in there, you can leverage different services and do different things; we'll call these functions. You can perform insights, visualizations, analysis, response, and integration. For insights, you're getting insights into your virtual machines, containers, and applications. For visualization, you might be making dashboards, using Power BI, or creating workbooks. For analysis, you might use the log analysis or metric analysis tools. For responses, you might create alerts or start autoscaling. And for integrations, you might use Logic Apps or export APIs to connect things up. So there you go.
So now let's take a closer look at all the different sources, because this is really going to help us understand the utility at each level, and I think it makes everything crystal clear. Remember that list of sources, from application data down to custom sources? We're going to start at the top of that list and look at application code, which is all about the performance and functionality of your application and code: collecting traces, application logs, and user telemetry. Here is the visual representation: on the left-hand side we have our sources, in the middle we have our storage, and on the right-hand side we have the services we'll use to do things with that data. Looking at the left, in your application you'll probably want to install the instrumentation package; this allows us to collect data into Application Insights, which is a service for getting rich data about our applications. Then you have availability tests, which test the responsiveness of your application from different locations on the public internet. That's really useful if, say, you're running your application in the east of Canada and the west of the US and you want to make sure the response times are the same or lower. Your metrics describe the performance, operations, and custom metrics of your application. Your logs store operational data about your application, such as page views, application requests, exceptions, and traces. As for what goes into storage: you can send the application data to Azure Storage for archiving, store your availability test results if you need to analyze them further, or create a debug snapshot of your data so that you can debug it at a later point. Okay.
Let's take a look at how we're going to monitor our operating systems, and this is for the guest operating system, not the host operating system. When you're dealing with virtualization, the VM has its own OS, but the underlying hardware has its own OS as well; that's the host one, and it's not what we're looking at. You don't need to monitor that one; that's up to Azure, or your cloud provider. We're looking at the guest one, the one we actually control. The operating system is going to need a couple of tools, or agents, installed. We'll want to install the Log Analytics agent so we can use Log Analytics, and we'll probably want to install the Dependency agent, which allows us to monitor processes on the machine: the programs that are running, like MySQL, Redis, maybe a Ruby on Rails app, whatever. I also want you to know that these agents can be installed anywhere: on Azure, on-premises, or even on other cloud providers like AWS. If you want performance counters, you're going to have to install the Diagnostics extension, which I think is a good thing to do. And just to note: the Log Analytics agent stores its data into Logs, which you can then use with Log Analytics later. If you want to store the health state information, you'll use the Azure Diagnostics extension, which puts it in Azure Storage. And if you want to use Azure Event Hubs (Event Hubs is a way of connecting your app to other destinations), you're going to stream it, but you'll need the Diagnostics extension for that. All right. Let's take a look at Azure resources and
how we're going to monitor those. You're going to have resource logs, and these provide insights into the internal operations of Azure resources. These logs are created for you automatically, but you will have to set up diagnostic settings to specify a destination for them to be collected, for each resource. Metrics you get automatically; you don't have to do any additional work to configure them, and you can analyze them in Metrics Explorer. For your log data, you'll use Log Analytics to look for trends and do other analyses, and you can also copy your platform metrics into Logs for analysis. Your resource logs can be archived to Azure Storage for long-term backup, and if you want to send or stream your data to other destinations, you can use Event Hubs; that's generally what Event Hubs is for. So there you go.

Taking a very quick look at how we
monitor our Azure subscription: this is going to be for the service health of the different resources you're using (are they okay, are they running?) and for things about Azure Active Directory. That's about it.

Now let's take a look at our Azure tenant and how we would monitor that. If you remember our Azure AD section, that's where we defined tenant, because a tenant is highly coupled to Active Directory. This is for tenant-wide services such as Active Directory: reporting that contains a history of sign-in activity, audit trails of changes made within a particular tenant, things like that. So there you go.

Last on our list is custom sources. Basically, if none of the previous categories fit where you want to collect data from, this is where you can collect data using the Azure Monitor REST API. You use a REST client, and the data gets stored in Log Analytics or Azure Monitor. So it's really custom data, custom storage.
There are two fundamental types of data we care about when working with Azure Monitor: logs and metrics, and Azure has two services to deal with them. One is called Azure Monitor Logs. This service collects and organizes log and performance data from monitored resources; the data is consolidated from different sources into workspaces, which we'll talk about. It collects platform logs from Azure services, log and performance data from VM agents, and usage and performance data from applications, which can also be consolidated. These workspaces can be analyzed together using a query language, which we definitely cover in this course, and you work with these log queries and their results interactively using a subservice called Log Analytics, which is something we'll definitely be covering. Then there's Azure Monitor Metrics, for the metric side: it collects numeric data from monitored resources into a time-series database. Metrics are numerical values collected at regular intervals that describe some aspect of the system at a particular time. They're lightweight and capable of supporting near-real-time scenarios, useful for alerting and fast detection of issues, and you can analyze them interactively using Metrics Explorer, which we cover as well. So really, these two services are like the data stores, or databases, for their respective types of data, and each of them has subservices for exploring, plus some additional services. But let's get into that in more detail now.

So when you use Log Analytics you're
probably going to want a workspace, and this is a unique environment for Azure Monitor log data. Each workspace has its own data repository and configuration, and data sources and solutions are configured to store their data in a particular workspace. It's interesting, because if you go over to Azure Monitor you can use Log Analytics without creating a workspace, but I believe that if you want to isolate your data, or collect it from outside of Azure services for other purposes, you're going to need a workspace, and it also gives you much more robust options. So creating workspaces is something you're going to end up doing, and it's a good habit, but there's not a lot more to say there. Let's move on to the actual query language, which is really the meat of Log Analytics.

First, I want to quickly touch on the Log Analytics tool itself: it's used to edit and run queries within Azure Monitor Logs. It kind of looks like something you'd use to connect to a database, because it really is structured like a database, with tables and columns and things like that, and it has its own query language called KQL. The idea is that you input your queries and it outputs results for you, and that KQL language is something we're going to look at in greater detail so that we know how to use that panel.
Let's take a closer look at Kusto and its query language. Azure Monitor Logs is based on Azure Data Explorer, and along with it came the Kusto Query Language, also known as KQL. This is how we're going to filter, sort, and do things with our logs. Kusto is based on a relational database management system, and it supports entities such as databases, tables, and columns; there's also this thing called clusters. KQL actually has a lot of utility in Azure, because it's not just in Monitor Logs and Data Explorer: you can use it in Log Analytics, log alert rules, workbooks, dashboards, Logic Apps, PowerShell, and the Azure Monitor Logs APIs. So it's definitely something you can use across the board in Azure. They have lots of operators you can use: calculated columns, searching and filtering on rows, group-by aggregates, joins, functions, and we'll look at a lot of the operators in more detail after this slide. Anyway, queries execute in the context of a Kusto database that is attached to a Kusto cluster, and we'll talk about clusters, databases, tables, and columns up next.

Let's take a look at what makes up
a Kusto deployment. We have a bunch of entities here: clusters, databases, tables, columns, and functions, and I have this nice visual to help us see how they all work together. At the top we have clusters: entities that hold multiple databases. You can also have multiple clusters; it's just not shown in that graphic. Then you have the databases themselves: named entities that hold tables and stored functions. You have tables: named entities that hold data. A table has an ordered set of columns and zero or more rows of data, each row holding one data value for each of the columns of the table. Then there are the columns themselves: named entities that have a scalar data type. Columns are referenced in a query relative to the tabular data stream that is in the context of the specific operator referencing them. Then we have stored functions: named entities that allow reuse of Kusto queries or query parts. And then you've got external tables: tables that live outside of your cluster. I think you reference them from storage accounts, in blob storage, so they could be things like CSV files. These external tables are used for exporting data from Kusto to external storage (storage accounts), as well as for querying external data without actually ingesting it into Kusto. Hopefully that gives you an idea of the lay of the land there.
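That entity hierarchy shows up directly in query syntax. As a small sketch (the help cluster and its Samples database are Microsoft's public Data Explorer demo data, so this is runnable there; in Log Analytics you'd normally just query tables in your own workspace):

```kql
// Walk the hierarchy: cluster -> database -> table, then take a few rows.
cluster('help').database('Samples').StormEvents
| take 5
```

The cluster() and database() functions are how cross-cluster and cross-database queries reference entities outside the current context.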
Let's take a look at the data types we can use in Kusto, but first let's define what scalars and data types are. Scalars are quantities fully described by a magnitude or numerical value alone; the idea is it just means a single value. A data type defines how a piece of data is interpreted, so an integer number could be a data type; if you've ever used a programming language or an SQL database, you should be familiar with these. In Kusto, data types are used for various things, such as columns, or function parameters that expect a specific data type. So let's go through the quick list of data types, because there are quite a few, and I've summarized them. The first is bool, which represents a true/false value. Then you've got datetime, which represents a date and time; these are stored in the UTC time zone. You've got decimal: numbers like 12.88, something with a decimal point in them. You have int, which is a whole number, and long, which is also an integer but with a greater range. You have guid (a UUID): these are unique values, like a random hash, and the idea is that you have unique identifiers that aren't just one, two, three, four, five, so people can't guess the size of your tables. Then you have real: double-precision floating-point numbers, something you'd probably use if you're doing things with finance or very precise numbers. You have string: Unicode strings, limited by default to one megabyte, written wrapped in quotations, like "hello world". You have timespan: time intervals, so 2d equals two days, 30m equals thirty minutes, and one tick equals 100 nanoseconds; there are a variety of those. Then you have dynamic, a special type that can do a bunch of things: it can accept primitive scalar data types, so it's kind of like a type that can accept any value, or an array of any value, or a property bag, which, if you're from the JavaScript world, looks like a JSON object: keys and values, which can even be nested. And last there is null, which isn't something you set as a column type; it's a special value that represents a missing value, and any of these data types can hold a null. So a bool can be true, false, or null; a datetime can be a date and time or null; and that goes for all of them.
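A quick sketch of how several of these literals look in an actual query (runnable as-is in Log Analytics or Data Explorer, since print doesn't need a table):

```kql
// One row demonstrating a few data-type literals from the list above:
print flag   = true,                          // bool
      when   = datetime(2024-01-01 12:00),    // datetime (UTC)
      ratio  = 12.88,                         // real
      window = 2d + 30m,                      // timespan arithmetic
      bag    = dynamic({"user": "abc", "hits": [1, 2, 3]})  // property bag
```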
Let's talk about Kusto control commands. These are really part of the KQL tooling, but they're a way of working with the databases and tables themselves. If you've ever used Postgres and typed \du, those commands don't really have to do with querying but with managing the databases and tables; that's what control commands are. I'll give an example: here we have a control command to create a Kusto table. They always start with a period, so .create table Logs, and then it specifies two columns. When you're using the query tooling, like Log Analytics, you can just type a period and start typing, and it will show you the list of control commands. It was really hard to find a full list in the documentation, otherwise I would have picked some out for you, but generally you can explore that way, and the most common ones you'll come across are all in the documentation, even though the full list of control commands is very long. A very common one you're going to use a lot is show; for example, we can do .show tables and then count the number of tables there are. If you type .show, it'll show you what it can take as a second parameter, and there's a huge list there. So that's an idea of how you can work with the control commands.
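The two examples just described look like this when written out (the Logs table name and its columns are just illustrative):

```kql
// Control commands start with a dot. Create a table with two typed columns:
.create table Logs (Timestamp: datetime, Message: string)

// List the tables in the current database, then count them:
.show tables
| count
```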
Now take a look at Kusto functions. These are reusable queries or query parts, and Kusto supports several kinds of functions. The first is stored functions, which are user-defined functions stored and managed as a kind of database schema entity. To take user-defined functions one step further, they come in two categories: scalar functions, which take scalar data types as input and output scalar data types, and tabular functions, which take tabular data as input and output tabular data (tabular data is just when you're working with multiple rows in a table). Then you have query-defined functions, which are also user-defined functions, but defined and used within the scope of a single query; very similar to stored functions, but it's all about scope. Last on our list is built-in functions, which are hard-coded: they're defined by Kusto, cannot be modified by users, and give you a lot of utility. Let's take a look at some of these built-in functions, starting with special functions that select Kusto entities: you might want to say, I want to select a cluster, then a database, then a table. Then you have aggregation functions, which perform a calculation on a set of values and return a single value; maybe you want a count, so you see count with parentheses. Then you have window functions, which operate on multiple rows or records in a set; one really popular one is row_number, which numbers the rows in relation to your query. So that's something you could do there.
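A query-defined function in practice is just a let binding used later in the same query. This is a sketch; the Requests table and DurationMs column are hypothetical names, not from the course material:

```kql
// A query-defined tabular function: takes a scalar parameter,
// returns the rows slower than the threshold.
let SlowRequests = (thresholdMs: long) {
    Requests
    | where DurationMs > thresholdMs
};
SlowRequests(500)
| count
```

Saving the same body with a .create function control command instead would make it a stored function, usable across queries.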
Let's take a look at some scalar operators; there are quite a few here. These are for working with scalar data types to do comparisons, and there are a bunch of categories, so let's quickly go through them. The first is bitwise operators, where you're working with zeros and ones and flipping them around: you've got binary AND, NOT, OR, shift-left, shift-right, and XOR. These make a lot of sense if you know how to work with binary math; if you don't, don't worry about them, but I want you to know they're there for you. Next are logical operators, which you'll be more familiar with: equality, inequality, and, or. Pretty darn simple. Then you have datetime and timespan arithmetic: you can add or subtract datetimes, and you can add, subtract, divide, or multiply timespans, so one day plus two days makes three days. Then you have numerical operators, which work on ints, longs, and reals: you can add, subtract, multiply, and divide, and you can do modulo, which gives you the remainder of a division, so 17 modulo 2 returns one, because 17 doesn't divide evenly by two; if it did, it would return zero. Then you have less-than, greater-than, equal, not-equal, less-than-or-equal, greater-than-or-equal; you get the idea. There's "equals one of the elements", which is in; very familiar if you're used to SQL. Then you have the opposite, where you put an exclamation mark in front of it; actually, a lot of operators get their opposite by prefixing an exclamation mark, especially the string operators. There's a bunch of string operators, and almost all of them have a variant with an exclamation mark in front. Then you have the between operator: this matches input inside an inclusive range, so you can say between 1 and 10, or between two datetimes. So there you go.

Let's take a look at Kusto tabular
operators and i really want you to pay attention to this one because this is where all the power of kql happens so
these perform comparisons against a bunch of rows that's why it's called tabular operators
and there's a lot of them and we're going to look at the most common ones because it would take me forever to go
through all of them and you're not going to remember them all so let's just look at the ones we actually care about
working with so the first is count this returns the count of rows in the table then you have take this returns up to a
specified number of rows of data you have sort this uh will take the rows of the input into the
order by one or more columns you see buy damage property descending so let's just
sort then you have project returns a specific set of columns and where filter a table
to subset of rows that satisfy a predicate before we move on the next slide i just want to point out like if
you've worked with sql these are very familiar right like take is like limit sort is like order by
project is like select okay moving on to the next one here we have top returns the first n records sorted
by uh specified columns this is kind of like a shorthand i think it takes uh take and sort and abbreviates them
to one line you have extend creates a new column by computing a value so notice that it says duration equals end
time minus start time and then we're using duration somewhere else then you have summarize aggregate groups of rows
that's kind of like group by in regular sql and then render so renders
results as a graphical output and to me that's a really cool one
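just to tie these tabular operators together here's a rough sketch of a query chaining them the StormEvents table and its columns come from the kusto tutorial sample database so treat the names as illustrative

```kusto
// illustrative pipeline over the StormEvents sample table from the Kusto tutorials
StormEvents
| where State == "TEXAS"                                    // filter rows by a predicate
| extend Duration = EndTime - StartTime                     // compute a new column
| summarize TotalDamage = sum(DamageProperty) by EventType  // aggregate, like group by in sql
| sort by TotalDamage desc                                  // like order by
| take 5                                                    // like limit
```

each operator takes the table produced by the previous one which is why the pipe character reads so naturally if you know unix pipes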
moving on to metrics we're going to be looking at metrics explorer so metrics explorer is a subservice of azure
monitor that allows you to plot charts visualize correlating trends and investigate spikes and dips in metric
values so the idea is you can create a really cool graph like that and you can make a variety of different ones based
on how you want to chart it out let's talk about how would we define one of these metric visualizations in metric
explorer so you got this cool bar and the idea is you got to fill it all the way to the end and then that will
visualize it for us the first is the scope so you're going to open this up and it's going to allow you to um select
resources so it'll show you like subscription and stuff like that and resource resource groups but always
makes you select a resource at the at the end and some services you can select multiple resources and some you can only
select a single instance so the storage account there it's only going to be a single one there so i had one called
davestrom institute i made and then there you're going to choose a namespace so this is a specific group of
metric data within a resource so notice like it makes sense for a storage account that would show account blob
file queue table it's going to vary based on your service then you actually have the metric you care about so we
have availability uh egress ingress etc and a bunch of other things you're going to choose one of
those and then you choose how you want to aggregate aggregate it so average min max etc and again this is going to be
totally different based on what resources you choose but that's generally how it works
let's take a look here at azure alerts and this helps us be notified when there
are issues found within the infrastructure or application and this allows us to identify and address issues
before the users of your system notice them and so they come in three flavors we've got metric alerts log alerts and
activity log alerts and when an alert is triggered you can be notified or have it take action so here is kind of the
anatomy of an alert and we have the alert rule this defines what we should monitor like the service and
the definition of when it is triggered which is going to be the next
part here so a resource such as a virtual machine designated as the target resource will emit a signal so it's going
to be emitting a data payload and it could be of the following types could be a metric a log activity log application
insights you can kind of see how that ties to the types of alerts then you have the criteria or logical tests this
gets evaluated and determines are we in a triggered state it could be like percentage cpu greater than 70 percent
then you have your action group which contains the actions that will be performed when it is
triggered and actions could be things like run an automation runbook use azure functions itsm logic apps web
hooks or secure webhooks on the other side there we have this box over here and this is all about the
state of your alert and so we have monitor condition and alert state so the monitor condition is set by
the system and the alert state is set by the user and the idea there is so you can track where it is because
you might want to have a history of saying okay i've resolved this issue so i'm marking this
as closed and that'd be like an alert state there and so there you go
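by the way for log alerts the criteria is itself a kql query so a condition like percentage cpu greater than 70 percent could be sketched roughly like this assuming the standard Perf table collected from vm agents the names here are just for illustration

```kusto
// rough sketch of a log alert criteria query (assumes the Perf table from the VM agent)
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize AvgCpu = avg(CounterValue) by Computer, bin(TimeGenerated, 5m)
| where AvgCpu > 70   // rows returned here would put the alert in a triggered state
```

the bin function buckets the timestamps into five minute windows which is what gives you the per-interval average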
this is probably not going to be on your exam but it's super cool to show and it's only one slide away so let's talk
about it which is azure dashboards these are virtual workspaces to quickly launch tasks for day-to-day
operations and monitor resources and build custom dashboards based on projects tasks or user roles so the idea
is you can go in here and you have like this little tile uh editor and you can drag stuff over so you can see i can
put in like a video and a link to the help support and a clock and some metrics i care about and some markdown
so it's a really good way of building out these customized dashboards based on a user role to really help you focus on
what you have to deal with within your infrastructure on azure
let's take a look at azure workbooks just because this is in the scope of azure monitor so we should cover it so
workbooks provide a flexible canvas for data analysis and the creation of rich visual reports within the azure portal
to allow you to tap into multiple data sources from azure and combine them into a unified interactive experience
and so the key word here is story it tells a story about the performance and availability about your applications and
services so this thing kind of looks like a dashboard but it isn't and it's highly
customizable but it's not very clear in here but really what it is it's like it's like the form of a document and the
idea is that imagine that you have a document and then you can embed analytics in it
that are real-time analytics so that you can visualize uh uh and and kind of investigate and discuss performance and
availability and stuff like that you can kind of think of it like if you ever worked with jupyter notebooks but it's
really for performance and monitoring um it's kind of like that okay and these things are highly customizable and
really useful and uh if you ever use datadog datadog has its own version of this i think they call them
notebooks it's the same idea but it's a really great tool for
really understanding your performance
hey this is andrew brown from exam pro and we're looking at azure monitor cheat sheet this one's a three parter so let's
jump into it azure monitor is a comprehensive solution for collecting analyzing and acting on telemetry from
your cloud and on-premises environments and even though i don't have it in the slide here it is an umbrella service i mean
there's a lot of things underneath this and that's why we have three pages of information and create visual dashboards
smart alerts automated actions log monitoring to obtain observability you need
metrics logs and traces you have to use them all together using them in isolation does not give you observability
a metric is a number that is measured over a period of time a log is a text file that contains event data of what was
happening and a trace is a history of a request that travels through multiple apps and services
so we can pinpoint performance or failures azure monitor collects two fundamental
types of data from sources logs and metrics which kind of matches up with our
theoretical information here right so azure monitor logs collects and organizes log and performance data for
monitoring resources data is consolidated from different sources into workspaces platform logs from azure
services logs and performance data from virtual machine agents and usage and performance data from applications can be
consolidated in a workspace and they can be analyzed together using a sophisticated query language
which we'll talk about here in a moment or review i suppose um work with log queries and their results
uh interactively using log analytics azure monitor metrics collects and that's the second part here right so the
two fundamentals azure monitor metrics collects numeric data from monitored resources into a time series database
metrics are numerical values collected at regular intervals and describe some aspect of a system at a
particular time they're lightweight and capable of supporting near-real-time scenarios useful for alerting and fast detection
of issues you can analyze them interactively via the metrics explorer onto page two log analytics is a tool in
the azure portal used to edit and run log queries with data in azure monitor logs log analytics queries use the
language called kql the kusto query language
so a log analytics workspace is a unique environment for azure monitor logs data each workspace has its own data
repository and configuration data sources and solutions are configured to store their data in their workspace and i really
wish i put the word data like in here it's not important for uh the associate but for other exams it helps you
understand if you think of it as a data lake azure monitor logs is based on azure data explorer and log queries are
written using the kusto query language kql can be used in log analytics log
alert rules workbooks azure dashboards and all over the place kusto is based on a relational database management
system so you'll see databases tables and columns some query operations include calculated columns searching and
filtering on rows group-by aggregates join functions if you're used to using sql you know what i'm talking about
kusto queries execute in the context of some kusto database that is attached to a kusto cluster kusto is
generally composed of the following entities clusters databases tables columns functions let's talk about them
quickly here clusters are entities that hold databases databases are named
entities that hold tables and stored functions stored functions are named entities that allow reuse of
queries or query parts tables are named entities that hold data columns are named entities that hold scalar data
types and then you have external tables which are entities that reference data stored outside the kusto database
these also count as tables that's why they're not in the list up here but generally these are pretty
self-explanatory metrics explorer is a sub-service of azure monitor that allows you to plot
charts visualize correlating trends and investigate spikes and dips in the metric values to visualize the metric
you need to define the scope the namespace the metric and the aggregation we're on to the last page here
alerts notify you when issues are found within your infrastructure or application they allow you to identify and
address issues before the users of your system notice them azure has three kinds of alerts metric log and activity
log alerts there is a diagram if you remember our follow along i actually break down
all the uh structure of an alert you should go review that i
didn't put in the cheat sheet because it just would have been too much here azure dashboards are a virtual workspace
to quickly launch tasks for day-to-day operations and monitor resources azure workbooks provide a flexible canvas for
data analysis and the creation of rich visual reports within the azure portal and just to highlight what
azure workbooks are for they tell the story about the performance and availability of your applications and services
then we have application insights and this is an application performance management an apm service and it is a
subservice of azure monitor that's why all these things are under azure monitor automatically detects performance
anomalies includes powerful analytics tools to help you diagnose issues to understand what users do with your app
designed to help you continuously improve performance and usability works for apps on the dotnet node.js java
python hosted on premise hybrid and public cloud works everywhere basically integrates with your devops processes
can monitor and analyze telemetry from mobile apps by integrating with visual studio app center to use application
insights you need to instrument your application to instrument you need to install the
instrumentation package sdk or enable application insights using the application insights agent when supported apps can
be instrumented from anywhere when you set up your application insights monitoring for your web app you create
an application insights resource in azure monitor you open the resource in the azure
portal in order to see and analyze telemetry collected from your app and
last the resource is identified by the instrumentation key the ikey so there you go
hey this is andrew brown from exam pro and we are taking a look at azure api
management so this integrates existing back-end services into modern api gateways and uh this service uh there's
a lot to it so we're going to be doing quite a bit here maybe more than we have to but it is a very powerful service
especially if you're trying to be a developer on azure so we're going to just make sure we spent a good amount of
time with it so it follows the api-first approach of decoupling the front-end and back-end teams with the help of api
mocking azure api management handles the full management of your apis it
centralizes the securing versioning documentation and compliance from your backend
services in a single endpoint so very powerful tool but let's get to it
so let's go over uh key concepts or key components of api management i do not have a fancy visual for this it's just
very hard to visualize but uh we will learn all this stuff as we go through it so let's talk about what we have here so
we have an api that represents a set of operations api operation which connects an api
endpoint to its back end we have a product which is a logical grouping of apis
a single or group of apis make up a product so this is how your apis are presented to developers so it can either
be public or private we have a backend that represents backend services in your api
there are groups these are used to manage the visibility of products to developers such as administrators who
have full access to api management developers for users with access to the developer portal with permissions to
build applications guests users without access to the developer portal but with read permissions to
some services there's the idea of developers so developers belong to groups and each
developer has a primary and secondary key to call that product's api there are policies uh
configurations and validations that are applied to incoming requests and
outgoing responses which you will see in closer detail in the upcoming slides there are named values so these are key
value pairs used with policies values can be the result of an expression uh there are gateways so this is where your
api calls are received and policies are applied to incoming requests then there's the developer
portal so this is where developers can access all the apis and products listed by your apim alongside the api's
operations documentation developers can also request access to your apis from the developer portal but again uh we'll
figure this out as we go through okay all right let's take a look here at the
echo api service so when you create an apim gateway that's the first thing you'll do you'll get by default uh this
thing called echo api and i was like whoa what is this thing i wasn't sure what it was
and i thought it was kind of interesting that's why i expand upon it here which i didn't really see much of it in the
documentation so i had to piece things together but the idea is that echo api provides a bunch of
existing endpoints and these are mocked endpoints to a non-production azure service used to test azure api
management so you know if you don't have an api yet but you want to interact with it it's going to go to the service
called echoapi.cloudapp.net forward slash api and again it's just a dummy
application to test against so that's what that is so if you see it you don't need it you can absolutely delete it but
you always get one when you create a gateway okay
all right let's take a look at the feature comparison for apim because it's one of those services
where there's a lot of features but they're not always available to you depending on what plan subscription what
kind of user you are so let's take a look at how the features change based on uh what plan you're using so across the
top we have consumption developer basic standard and premium red is what's not available for that particular plan green
is what is so for azure ad we do not have it for consumption or basic
for virtual network support it's just it's just not supported for consumption basic and standard
for multi-region deployment availability zones we're only getting that for premium for multiple custom domains
premium as well and also for developer then the developer portal the built-in cache and built-in analytics are not available in
the consumption plan if you want a self-hosted gateway you'd better be on premium basically developer
almost always says yes because you have to play around with things right um so tls settings is for everyone
external caches for everyone client certificate authentication policies are for everyone which is very generous backup
and restore you're not going to get in the consumption model and that's not all we got a few
more here so if you want the direct management api azure monitor logs and metrics static ip or the web
sockets api you're not getting those in the consumption model now the graphql
api is available for all of them so there you go all right let's take a look at api
authentication so in order to authenticate with our apis we configure those settings under our
subscription settings so it's as simple as doing a checkbox there so if subscription is required only developers
with a valid access key can use it and so the idea here is we can configure where the api will receive
those access keys which can be sent as a header or query string so there are some options there if this
thing is not checked then that means anonymous requests will be allowed so it's as
simple as having that checkbox there and having a key so there you go
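just to sketch what that looks like on the wire the default header name apim looks for is Ocp-Apim-Subscription-Key and the hostname below is just a placeholder

```
GET https://contoso.azure-api.net/echo/resource HTTP/1.1
Ocp-Apim-Subscription-Key: <your-primary-or-secondary-key>
```

the same key can instead be passed as the subscription-key query string parameter if you configure the query string option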
all right so let's take a look here at groups for apim so groups are used to manage the visibility of products to
developers and so we can break these down into some broad categories we have administrators so they manage api
management service instances and create the api's operations and products that are used by developers then we have
developers they are authenticated for the developer portal the users that build applications using uh your apis
developers are granted access to the developer portal and build applications that call the operations of an api then
you have guests so these are unauthenticated developer portal users such as prospective customers visiting
the developer portal they can be granted certain read-only access such as the ability to view apis but not call them
so administrators can also create custom groups or use external groups in an associated azure active directory tenant
to give developers visibility and access to api products a user can belong to more than one group so there you go
all right let's take a look here at front ends and back ends starting with
front ends so front ends define the route or endpoint and the documentation configuration around that endpoint so
the idea here is we have front end and we have a single method here uh which is post
forward slash resource and if we were to open that up you can see that we have i'm just getting my pen
tool out here we have a description for our documentation the display name the name
of i can't remember what the difference for the name is but it's called create resource we can set its url and its
method and then down below we have additional parameters so apim does not host apis but it creates
facades for your apis that's a key thing to remember api management does not host apis it creates facades
for your apis so let's take a look at the back end so for back ends you can set the following
types you can set a custom url so point to a server where your service is running you can say to go to azure
resources integrate directly with a resource such as azure functions app service container app logic app i'm
just missing a p there but it's two p's so you have the idea up here where we
can see custom url azure resource azure service fabric okay we have some additional options here so
we can set authorization credentials that present credentials to the backend service with each request
there are options like headers so that's http headers we can fetch from named values we can send
query string parameters you can fetch from named values there as well you have control of client certificates
so x.509 certificates which we do talk about in this course which are stored in azure key vault
which is the section that we're talking about there so you're just seeing those options headers queries and client
certificates but yeah there you go
all right let's take a look here at policies for apim so api management policies allow you to change the
behavior in multiple stages of your endpoints request lifecycle you can update any part of the request response
message such as the headers bodies urls and there are four areas where policies can be applied we have inbound for
incoming requests backend before requests reach your backend outbound before sending responses back to the
client and error when a request encounters an error just to kind of visualize it we have our front end
incoming processing backend outbound processing and so the idea is that here
for the inbound processing that's where we have policies apply that's the one they're talking about right here
and then we have for the back end so before requests reach your backend that's backend processing and then uh error is not
visualized because it would be whenever an error would occur here okay azure has a collection of
policy groups which contain many policies you can apply and we got a big list here we have access restriction
policies advanced authentication caching cross-domain transformation dapr integration validation policies graphql
there's a lot of great policies that we can already utilize when an error occurs no other policies
are applied except the error policies however if other policies were in effect prior to the error they will not be
removed product level policies apply to all api operations within a product so now let's
go take a look at an example of a policy okay
all right let's take a quick look here at a policy example so you're going to see uh this policy here
and we're doing it right on the echo api so this is for an outbound policy to cache the response in a get operation
i believe that all those policies are in the documentation you can easily copy and paste them in uh we
do do a follow along for policies that was something i definitely covered i just don't remember if it was part of
the system or i mean it must be because it says here retrieve resource cache but anyway the
idea here is that um this policy stuff is in an xml language um not super important to learn what it
is but the fact is that there are a lot of policies available for us so we can pretty much copy paste and
figure it out from there if that is clear and here is kind of another visualization that i just want
to show you relating to our policies here uh just because it kind of maps up my little
graphic where we have the front end the inbound processing the outbound processing here but if you notice here
it says policies i know it's really small but you can see it says base cache-lookup rewrite-url and here it says base
and then cache-store so the idea is you can see what policies are being applied there so there you go
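for reference a minimal version of that caching policy pair might look like the following this is a sketch based on the documented cache-lookup and cache-store policies and the duration is an arbitrary example value

```xml
<policies>
  <inbound>
    <base />
    <!-- look for a cached response before forwarding to the backend -->
    <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" />
  </inbound>
  <outbound>
    <!-- store the backend's response in the cache for one hour (3600 seconds) -->
    <cache-store duration="3600" />
    <base />
  </outbound>
</policies>
```

the base element pulls in whatever policies are defined at the broader scope like the product or global level which is why you keep seeing it in these examples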
all right so what we're going to do here is look at all the possible
policy groups that are provided by azure uh it might say policy groups i mean the policies within those policy
groups so you get an idea of what kind of policies you can use to transform manipulate
filter during the transit of a request or response through apim and we're not
going to look at all the code examples but i'll pull out a couple per policy group here
we're starting with access restriction policy so the first is check http header so enforce existence or value of http
header then we have limit call rate by subscription that prevents api usage spikes by limiting call rate on a per
subscription basis that's pretty cool when you look at other providers like aws where they have api gateway there's
like a fixed limit on there i don't remember there being any kind of policy to do that so i like how you have the
flexibility there to choose hopefully it's on there by default in that base i don't know limit call rate by key so
prevents api usage spikes by limiting call rate on a per key basis restrict caller ip so filter allow or deny calls from
specific ip addresses or an address range that could prove very useful set usage
quota by subscription so allows you to enforce a renewable or lifetime call volume and or bandwidth
quota on a per subscription basis set usage quota by key so allows you to enforce a renewable or lifetime call
volume and or bandwidth quota on a per key basis validate jwt so enforces existence and validity of a jwt
json web token extracted from either a specified http header or a specified query parameter jwt is very
common for authentication of client-side applications validate client certificates so enforces that a
certificate presented by a client to an api management instance matches specified validation rules and claims
let's take a look at that restrict caller ip very very simple we define ip-filter we
have an action and the ip address that is allowed in this case or a range so that one is again very simple these are not
hard to figure out we have validate jwt um so there is some stuff there and uh it is
what it is okay um so we'll go on to advanced policies so here we have control flow so
conditionally applies policy statements based on the evaluation of boolean expressions
forward request so forwards the request to the backend service limit concurrency so prevents enclosed
policies from executing by more than the specified number of requests at a time log to event hub so sends messages in
the specified format to a message target defined by a logger entity emit metrics so sends custom metrics to
application insights on execution mock response so aborts pipeline execution and returns a mocked response directly to
the caller retry retries execution of the enclosed policy statements and if
and until the condition is met execution will repeat at the specified time intervals and up to the specified retry
count return response so aborts pipeline execution and returns the specified response directly to the caller let's
take a look at limit concurrency that seems like a good one to have so here we have an inbound
then we define our back end we set our limit concurrency it's setting the forward request to a timeout of 120 so
preventing enclosed policies from executing by more than the specified number of requests at a time we have mock response that one
would be very very useful uh you know like you just want to you know you don't want to have a real response you just
want to put whatever you want in there so it gives back a status 200 for application json
more advanced policies here so send one-way request send a request to a specified url without waiting for a
response send request send a request to a specified url
set http proxy so allows you to route forwarded requests via an http proxy set
variable persists a value in a named context variable for later access so here's an example one where we're
setting that variable so we're saying is it mobile so the idea is it'll take that
value from the user agent and then we'll have that variable available for us to determine was the request mobile
we have wait so waits for enclosed send request get value from cache or control flow policies to complete before proceeding
set request method allows you to change the http method for a request set status code changes the http status code to
the specified value trace adds custom traces into the api inspector output application insights telemetries and
resource logs looking at authentication policies and i
don't know why that's highlighting oh you know i probably have a graphic above it that's why um so we have authenticate
with basic so authenticate with a backend service using basic authentication very common for test environments
authenticate with client certificates so authenticate with a backend service using client certificates
authenticate with managed identity very popular use case um for azure services so authenticate with the
backend service using a managed identity so there's an example basic auth that's a username and password you've probably seen
it if you've ever hit a site where they just give you kind of like an alert where you have to enter those two in and for
this one you can see that it's going to key vault with the managed identity
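as a sketch the managed identity policy is basically a one-liner where resource identifies the service you want a token for key vault is shown here only as an example value

```xml
<inbound>
  <base />
  <!-- acquire a token with the APIM instance's managed identity -->
  <!-- resource is the target service's identifier; key vault is an example -->
  <authentication-managed-identity resource="https://vault.azure.net" />
</inbound>
```

the nice part of this approach is there are no credentials stored in the policy at all the gateway's identity does the work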
caching policies so get from cache performs a cache lookup and returns a valid cached response when available probably
a very popular policy to be used store to cache caches the response according to the specified cache control configuration get
value from cache retrieves a cached item by key store value in cache stores an item in the
cache by key remove value from cache removes an item in the cache by key so here the example is obviously very big for
get from cache but here it says uh um vary by developer cache key lookup so i guess you're performing you're giving
it these things now sometimes they're not always very clear when you're reading them but
that's fine uh for store value in cache you provide a key
there is the value and the duration i'm assuming is how long it lives for maybe the ttl
cross-domain policies so we allow cross-domain calls this is something you probably really
really would want to enable especially if you're building applications that are not on the same
domain so i could see people using this quite a bit makes the api accessible from adobe flash and microsoft
silverlight browser-based clients did not expect that as a description of course i was thinking of cors that's
why cors adds cross origin resource sharing support to an operation or api to allow cross-domain calls i feel like this
one's going to happen a lot i didn't know microsoft silverlight still existed if people don't know silverlight is a
competitor to adobe flash i guess adobe flash probably does html5 now i'm just thinking of old flash players
jsonp adds json with padding support to an operation for an api to allow cross-domain calls from javascript
browser-based clients so that is the cross-domain one very simple
there's cors and if you've ever seen cors this looks the same if you're on aws if you're anywhere else it's the
same xml it looks the same so it looks big but it's not as scary and you'll come across it quite often
especially with apis transformation policies convert json to xml converts requests or response bodies
from json to xml convert xml to json you get it find and replace a string in the body mask urls in the content
set backend service set body these are very clear set http header
set query string parameter if you want to read these you can but they're very straightforward rewrite url
that's probably a very popular one but maybe the example is too large so i don't show it here transform xml using
xslt let's look at uh xml to json because i thought that one was kind of interesting so you say xml to json
and i guess it just turns it into json set http header is something very common that you'll be doing
you can just set a header for whatever values need to pass along we have zapper integration policies so
send requests to a service send message to pub subtopic trigger output binding and there's an
example of a trigger output binding validation policies this last group validate content validate parameters
validate headers, validate status code, and validate GraphQL request. We'll take a look at validate-parameters: here's an example where we have a parameter and it says prevent, prevent, detect, detect, prevent, ignore, ignore, ignore — I guess the idea is it's saying: if those parameters are present, allow them; if not, ignore them. Validate status code — I figured this one would be a little more interesting, but it's not. "Prevent" for the unspecified-status-code action means: do not allow an unspecified status code through. I didn't know that was possible — I thought all status codes were always returned — but you learn something new every day. Those are all the policy groups, and hopefully you get an idea of the policies you can apply, because that's the greatest power of APIM — those policies. Okay.

All right, let's take a look at different
ways we can define APIs within APIM. The idea here is that you're ready to create an API — you have a gateway, you want to create an API — so you hit that Add New API button, and you've got a bunch of options. You can define one manually: HTTP endpoints, WebSockets, or GraphQL. Of course, that varies with what you want. WebSockets are great for real-time stuff — maybe you're making a video game, maybe a chat app. GraphQL is becoming a popular alternative to REST-based applications with standard endpoints; GraphQL lets you define a query to query data. Personally, it's not my favorite, but I like the fact that it's all rolled up under one API gateway in APIM in Azure.

The next option is to define an API based on a schema — a standard for importing and defining APIs. We have OpenAPI version 3, WADL, and WSDL. The latter two I don't know much about, so following this video we're going to talk about these three standards, because I do think it's important to know all three — that's the real way you're going to get APIs in here; you're not really going to be doing it manually.

The last option is really great if you want to spin up an API that you know you want to integrate right away. Common ones would be App Service — actually, I guess all three of them, to be honest: Logic App, App Service, or Function App — and I'm pretty sure we'll use a Function App in the follow-along. So those are some options to get quickly started, but let's take a deeper look at these open definition standards for defining APIs and then importing them, starting
with OpenAPI.

All right, let's take a look here at OpenAPI. Here's an example of one written in YAML — it can also be done in JSON. Just taking a quick look, you can get an idea: you have paths — each path is an endpoint — and the method to be used, which here is POST. You can provide a summary and description, the content type (application/json here), schema information, and what responses should come back. Pretty straightforward — it gets more complicated than that, but that's the best snapshot I can give you of the language.

The OpenAPI Specification (OAS) defines a standard, language-agnostic interface to RESTful APIs that allows both humans and computers to discover and understand the capabilities of a service without access to source code or documentation, or through network traffic inspection. OpenAPI is really the leading one — it's what pretty much everybody uses nowadays. No matter which cloud service provider you go to, their API gateway will have an option to import OpenAPI. Swagger and OpenAPI used to be the same thing, but as of version 3 they're two different things: OpenAPI is the specification, and Swagger is a set of tools for implementing the specification. Swagger is by SmartBear, and SmartBear, I believe, is where OpenAPI originally came from. And OpenAPI can be represented as either JSON or YAML, as I said earlier — but there you go.
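To make that concrete, here's a minimal sketch of an OpenAPI 3 document in YAML — the path, schema, and titles are made up for illustration:

```yaml
# Minimal illustrative OpenAPI 3 document; path and schema are placeholders
openapi: 3.0.3
info:
  title: Example API
  version: "1.0"
paths:
  /orders:
    post:
      summary: Create an order
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                item:
                  type: string
      responses:
        "201":
          description: Order created
```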
Let's take a look at WADL and WSDL. They're very similar, which is why we're giving them one slide together: Web Application Description Language (WADL) and Web Services Description Language (WSDL). These are specifications associated with the World Wide Web Consortium (W3C). I believe WADL was originally made by Sun Microsystems; I don't know who made WSDL, but it definitely is under the W3C.

Here's an example of WSDL — very, very heavy, because it's all XML. Most people prefer JSON and YAML, which is why OpenAPI is so popular. But just take a look at WADL and WSDL: the idea is that they have similar concepts. With WADL you define an application; for WSDL the equivalent is a definition. WADL has grammars where WSDL has types; WADL has resource where WSDL has interface; method maps to operation, request and response map to input and output, and param maps to element and simple type. To me, WADL reads more like how we would describe things in everyday terminology. I can't remember which one is more advanced — one has more capabilities than the other — but to be honest, you're not going to be using either of these; you're going to be using OpenAPI. I just wanted to show them to you, and the fact that you can translate their XML concepts across, as in this table. Hopefully that gives you kind of an idea. There you
go.

Let's take a look at the developer portal. The developer portal is an automatically generated, fully customizable website with the documentation of your APIs. It's where API consumers can discover your APIs, learn how to use them, request access, and try them out. Here's an example of the default one you get. To be honest, I did not know how to use this — I tried really hard, because I thought it was a very cool idea. This is actually the second iteration of the developer portal; there was an older one that was different from this, but this is the new one. I like the idea, but in execution I'm not sure exactly what to do with it.

The key points: you'll need to publish for the developer portal to be publicly viewable; you can save revisions of the portal to quickly roll back to previous versions; and you can apply a custom domain to your developer portal. On the portal overview there's a link you click, and that's how you view it. It's not available in the Consumption tier, but it is available in all the other tiers. I guess the idea behind the developer portal is that if you're trying to sell a cloud service that's API-driven, customers pay for access through the developer portal — or it's also there so that people who need to use your APIs can read the documentation. In theory, I like the idea; in execution, I don't know.

All right, so there is authentication
for the developer portal that you can set, and there are a few different ways to do it. You can use Azure Active Directory B2C; identity providers like Google, Microsoft, or Facebook; or basic authentication, which is the default. If you're not familiar with basic auth, the idea is that you get prompted to enter a username and password — so not super complicated. There's also delegated authentication, which lets you use your own web app's sign-in/sign-up process and product subscription flow instead of the developer portal's built-in functionality. To find where that is: under the developer portal you have Identities, where you can add those, and you can see Delegation down below. Not super complicated — but note there is authentication for the API and, separately, authentication for the developer portal, so make sure you understand the distinction between those two, because it does get a bit confusing.

Let's talk about caching. We have a
built-in cache and an external cache. APIs and operations in API Management can be configured with response caching — when we were looking at policies, we saw how caching works there. Response caching can significantly reduce latency for API callers and backend load for API providers; you apply a caching policy, for example in the outbound section. The built-in cache is volatile and is shared by all units in the same region of the API Management service, so for that reason you have the ability to set up an external cache via Redis — that would be using the Azure Cache for Redis service. Using an external cache lets you overcome a few limitations of the built-in cache: avoid having your cache periodically cleared during API Management updates, have more control over your cache configuration, cache more data than your API Management tier allows, use caching with the Consumption tier of API Management, and enable caching for self-hosted gateways. You simply need to provide a connection string to your Redis cache.
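As a rough sketch, a response-caching policy pair might look like this — the one-hour duration is just an illustrative value:

```xml
<!-- Illustrative APIM response-caching policies; duration is in seconds -->
<policies>
    <inbound>
        <base />
        <!-- check the cache before calling the backend -->
        <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" />
    </inbound>
    <outbound>
        <base />
        <!-- store the response for one hour -->
        <cache-store duration="3600" />
    </outbound>
</policies>
```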
So there you go.

All right, after a really, really, really long wait — like 40 minutes, and I was able to do so many other follow-alongs while waiting — this is ready. A very long time, 30 to 40 minutes. We'll go to the resource here, and so we have this environment set up. Now we need something to actually route to, and the idea is that we want to set up a container app. So open a new tab, search for "containers", and there we have Container Apps — oops — we'll go ahead and open that, and from there we'll create ourselves a new container application. Container Apps is a little bit better than Container Instances — I don't know if it came afterwards; it serves a different purpose — but Container Apps is pretty nice.

So we'll create — actually, we don't need to create a new one; we'll use the same my-apim one — and we'll call this one my-container; it doesn't really matter what we call it. For the region — does this have a region, or is it a global service? It feels like it should be global; I don't think we chose a region, so Central US — I'm just going to stick to the same one just in case, which is a good habit when you're doing labs. We're going to start with the simple hello-world container, making it super easy for ourselves — no coding needed to get an application running. We'll go ahead and hit Create, and now we just need to wait for that to deploy. It shouldn't take too long, so I'll see you back here in a moment.
All right, after waiting — I don't know, four or five minutes — it looks like our container is deployed. We clicked into it, and there it is: the application URL. Open that in a new tab, you should see this page, and that is what we're going to try to route to from APIM.

So we'll make our way back over to APIM, go ahead and create a Container App API, and here we can browse and select our container. To make it super simple — I'm going to look at the Full option for a second to see if there's anything interesting there; nothing I care about — we'll go ahead and hit Create. So yeah, they make it really, really easy. Once that's configured, we just need to confirm this API actually works, so we'll give it a moment.

Okay, there we go — after a few minutes, there it is, so let's test that our API is working. Now, this web app serves an HTML page, so it's not really what we'd want an API to return, but it's the easiest example we can set up here. I'm going to go to the GET request, and from there go to Test. It's going to send a request — notice it's sending the subscription key; that's what we'd normally have to pass along, which is over here, and we'll do that in a different follow-along. We'll hit Send and give it a moment, and we get some data back: the HTML page, which is what we're looking for — "Welcome to Azure Container Apps." That's great, so we're done here. We'll keep this environment around for the next few follow-alongs, but there you go.
Hey, it's Andrew Brown, and we are continuing on with APIM. This time around, what we want to do is actually work with some subscription keys, because so far we've just been hitting the Test button — that's not how we're going to use our API, right?

So let's go over and set up a new product. Before we do that, I'll go to Subscriptions just to show you that we do have some keys here. Notice there's one called "service" — we came with a "product" one and an "unlimited" one, but the service one is what's actually being used in our APIM; when we were testing, that was the key it was using. If we go back to APIs and click into my-container, go to GET, then Settings, then Test, and hit Send below, I think it gives us some information so we'll know what key it's using. We can see the key somewhere here — or it should tell us. I do know it's using the service key; I can't remember how I know that, but I know it, okay?

The idea is that we want to create our own key at some point, but first we'll go to Products and make a new one. I'm going to call it "junior developers" — originally it was "developers", but I'll do junior developers just so we're not conflicting. We'll check Publish — that's okay — and we need to add our API, so here we'll choose my-container and go ahead and create it. We'll say this is for junior developers, even though it's really going to have full access. This is the way you can catalog your APIs or assign them to different groups — creating products is a good way to organize your stuff.
If we click into the product, you can see the APIs that are assigned; we can assign policies if we want, globally for everything in it; we have some settings for access control; and we have subscriptions. We do already have a subscription key, so we don't have to go and create a new one, but we should probably look at access control, because right now it's set up as administrator-only access. We'll go ahead and add another group called "developers" — I believe you can also create your own groups, but it's not super complicated, so there's no point in really digging into it.

We do want to get the primary key for this product. So click into here — oops, actually that's not what I wanted; there was nothing to change, and I don't know why I hit Save there, that was pointless. What we'll do is go back to Junior Developers under Products and go to the Subscriptions side, because we need to see the key, which is here under junior developers. If we go here and click Show Keys, I can get the primary key. We need that key because we actually want to make an API request, and this time we're going to use PowerShell for fun. So I'm opening up Cloud Shell — up at the top — and making sure it's set to PowerShell. It never hurts to learn some PowerShell; we could do it in Bash, but let's do PowerShell. We'll give it a moment to spin up. If you're doing this for the first time, it might ask you to make a storage account, so you might have to press some buttons here — just press whatever buttons it wants to get to this part, okay?
It doesn't normally take that long to spin up — there we go. The first thing we're going to set is a URL. I'm just waiting for it to give me the prompt — yep, still going — there we go.

So we're going to set a new variable — whoops, we'll type clear first. First we'll do the subscription key, because we already have it in our clipboard: subscription key, equals, double quotes, paste that in. It didn't actually grab it, so that's fine — I guess we'll grab the gateway URL from APIM instead. The gateway URL is the URL we need to hit — you'll notice it on the Overview blade of the APIM app; it's like the API endpoint you'd hit, right? So we'll type url and paste that in. It really doesn't like pasting there, so we'll have to right-click paste — it probably was working before for the subscription key; we just had to right-click. So we'll do subscription url equals, double quotes, and we'll grab that key again — I'm just going to scroll down and grab the primary key; you can use the secondary key, but you really don't need it — and paste it in. There we go, that looks like a key, so we'll hit Enter.

Now we need to set some headers. I'm going to type an @ sign, curly braces, then O-c-p dash A-p-i-m dash Subscription-Key, equals, dollar sign, subscription key. If you're wondering what all this is: in PowerShell, this is called a hashtable — another language might call it a dictionary or a hash; you might think of it as a JSON object — but it's just the data structure we need in order to pass the headers along. I'm just double-checking that I named it all correctly; we'll hit Enter. Now we'll use the PowerShell command
called Invoke-WebRequest. I'm hitting Tab to autocomplete — it's really nice that it does that; it'll even do it for the flags. I'll do the URL here; we'll do the method — I don't think we have to specify the method, because it'll already be GET, but we'll do it anyway to be verbose — and we'll pass our headers, and hit Enter.

And... we don't have access — it says missing subscription key. So we have a small problem here; I've got to double-check what the mistake is. You know what, I was staring at this thing like there's something wrong here: it's Ocp — what Ocp stands for, I have no idea. Well, we'll hit Enter and try again — and maybe we'll just print out the headers to make sure they're correct. There's no value in there, so we'll print out the subscription key variable to make sure that's correct — oh, because I wrote subscription url. This is what happens when you don't type carefully, okay — not a big deal, just learning a bit more as we go. I'll set that again, then print it out to make sure it's correct — there it is — and we'll invoke the request again, and we should get back HTML with a 200. It says "Welcome to Azure Container Apps."

So that's how you can make requests — with PowerShell, in this case; traditionally you might use Postman or your own application to do it. But there you go: that is how we work with a key — we use our own subscription key and work with it to make a request, not in the UI.
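To recap, the whole Cloud Shell session boils down to a few lines of PowerShell — the gateway URL and key below are placeholders you'd swap for your own values:

```powershell
# Recap of the Cloud Shell steps; URL and key are placeholders
$url = "https://my-apim.azure-api.net/my-container"   # gateway URL plus API suffix
$subscriptionKey = "<your-primary-key>"

# APIM expects the key in the Ocp-Apim-Subscription-Key header;
# @{ ... } is a PowerShell hashtable
$headers = @{ "Ocp-Apim-Subscription-Key" = $subscriptionKey }

# GET is the default method; specified here to be verbose
Invoke-WebRequest -Uri $url -Method Get -Headers $headers
```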
Okay, so I'll see you in the next one.

Hey, this is Andrew Brown, and we are still working with APIM. This time we're going to create a policy, because policies are some of the most powerful things APIM gives you. On the left-hand side, in the gateway we created, go to APIs, then my-container, and we could click into any operation, like the GET — actually, I want to make a new one. I'm going to call it "mock", because we're going to do a mock request, so give it the URL /mock and go ahead and create it. If we were to use this right now, it shouldn't do anything — if we test it, what do we get back? A 404 Not Found, because there's no page there.

So we'll go back to the Design view, and we're going to add our own outbound processing step — it doesn't matter what comes in; we're always going to return a mock response. We do that by adding a policy. Go to add a policy — you can see there are some basic ones, like setting headers and things like that — then go to Other Policies, which brings us back to the policy structure, and this is for the outbound section. I don't know why we had to click through all that to get here, but the idea is that we're going to add a new policy. So I'm going to search for "mock response APIM policy", looking through all the ones here — here's a bunch of policies, and we're just looking for the mock one. It's not a very flexible one — it basically just sets the status code — but it does work. So we'll paste that in, say we want this to be a 200, and we can return whatever we want. If we wanted a different content type — say CSV, it'd be text/csv — it doesn't matter; we can do whatever we want, even application/msword. I'm not sure what would happen, but let's do it for fun.
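For reference, a mock-response policy along those lines might look like this in the outbound section — the status code and content type are whatever you choose:

```xml
<!-- Illustrative mock-response policy in the outbound section -->
<outbound>
    <base />
    <!-- always return 200 with the chosen content type, regardless of the backend -->
    <mock-response status-code="200" content-type="application/json" />
</outbound>
```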
We could have just done JSON, but no, let's make it complicated. So now if we go to this mock request, test it, and hit Send, let's see what we get back: a 200 with "mock", and it doesn't complain about the content type. But let's try this from Cloud Shell just to make sure we know what we're doing — we were doing that before, so we're going to do it again; it's good practice. We'll just test our new endpoint and make sure it works. This drag behavior is sometimes weird — I clicked it and it's still dragging, which is very frustrating; sometimes you have to open it up like that, so silly. Anyway, we'll do a bit of PowerShell again to get this working — I'm just double-checking how to do that. The first thing we need is the URL — and if we hit the up arrow, oh, we still have our variables from before. So: you set a URL for the gateway, you set a subscription key as in the last follow-along, and then you need the headers. It's slightly different here, because the URL needs to end in /mock. Then we invoke the request, and we get back a 200 — no issues there.

So there you go — that's all it takes to set a policy. There are obviously more complex policies, but this is a simple example, because you really have to go digging through them, and it's not worth doing that unless you're doing it for real. What we'll do now is clean up — we're all done with APIM — find the resource group, delete it, and there you go: that is APIM. I'll see you in the next one.

Something throughout this course that
keeps getting mentioned, at least for the AZ-204, is the Distributed Application Runtime, also known as Dapr. It's not going to show up on the exam, but it's good to know what it is. It's a Microsoft project: the Distributed Application Runtime, Dapr for short, provides an API that simplifies microservice connectivity. Dapr is a portable, event-driven runtime that makes it easy for any developer to build resilient, stateless and stateful applications that run on the cloud and edge, and it embraces the diversity of languages and developer frameworks. It's interesting that they say "diversity of languages", because it doesn't support my favorite language, Ruby, yet it supports PHP — kind of explain that one to me. Dapr provides a bunch of functionality: the idea is that you write your application code up here, in whatever language you want, and then you communicate via gRPC or an HTTP API with a set of standard services — service invocation, state management, pub/sub, resource bindings and triggers, actors, observability, secrets, and configuration — and it lets you connect to edge infrastructure as well. Basically, these are wrappers, so to speak, for the standard functionality you'd want to have alongside a microservice application.
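Just to give a rough idea of what that HTTP API looks like — this is a sketch, assuming a Dapr sidecar running on its default port 3500 and a state store component named "statestore" (a placeholder name you'd have configured yourself):

```shell
# Save state through the Dapr sidecar's state-management HTTP API;
# "statestore" is a placeholder component name
curl -X POST http://localhost:3500/v1.0/state/statestore \
  -H "Content-Type: application/json" \
  -d '[{ "key": "order-1", "value": { "status": "shipped" } }]'

# Read the value back by key
curl http://localhost:3500/v1.0/state/statestore/order-1
```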
Dapr is not taking off the way you might think it would. I don't know anybody who's really that interested in adopting it, because it lives more in the cloud-native space — Kubernetes, CNCF — and the thing is, unless there's buy-in from that community and the project is truly agnostic, you're going to have a hard time seeing adoption; unless it meets those conditions, it just doesn't happen. It's very interesting — I think it'd be really cool to build your microservice applications on top of Dapr — but I strongly doubt many people are using it, because the future of the project is debatable. It is mentioned throughout this course, though, so I just wanted to give it a little more attention. We're not going to dig any further into Dapr code in this course, but there you go.
Hey, this is Andrew Brown from ExamPro, and we are taking a look at Azure Event Grid, a service that allows you to manage event routing from any source to any destination. Event Grid provides a simple, robust, and customizable event delivery process that lets you manage, at a minimum, which types of events will be received and which subscribers will receive those events. Here is an example graphic: you can see Event Grid sitting in the middle, where events are being captured — managed by Event Hubs in this example — and routed to other places. Event Grid is ideal for event-driven architectures, where you can subscribe to Azure resource events and publish them to an event handler or webhook. You can also use custom topics to create custom events that will be published to your Event Grid. It supports event fan-out with 24-hour retry for reliability, to ensure that events are delivered, and it is a low-cost serverless product that supports dynamic scalability. So there you go.
Let's talk about event sources and handlers. Azure Event Grid is divided into two categories: event sources, the services that emit data, and event handlers, the services that receive data — and in between them is where Event Grid sits. So what can emit events? This will give you an idea of how much Event Grid integrates with Azure services: Blob Storage, resource groups, subscriptions, Event Hubs, Media Services, IoT Hub, Service Bus, Azure Maps, Azure Container Registry, SignalR, Azure App Configuration, Azure Machine Learning, Azure Communication Services, Azure Cache for Redis, CloudEvents, Azure Policy, and custom events — basically anything you want to get in there. I'm sure there's support for other services too, but as you can see, we're limited on space — it can receive messages from a lot of places.

For event handlers — the services that receive data — it could be serverless code (think Azure Functions and the like); workflow and integration (think Service Bus and Logic Apps); buffering and competing consumers (Event Hubs, Storage Queues); and other services and applications — think Hybrid Connections, webhooks, or even Automation. So yeah, there you go.

All right, let's take a look at the key
concepts for Azure Event Grid. We have this fancy diagram at the top. First, domains: these are used to group Event Grid topics that relate to the same application, for easier management. Then topics: the endpoints where events are sent. There are different types of topics. System topics are built-in topics provided by Azure services — the most common kind you'll use, because they're super easy. There are custom topics for your own applications, and partner topics, which emit partner events from third-party SaaS offerings so they can publish events — similar to system topics, just with third parties, though there's some variation. Events are the actual event data for something that occurred within a service — not visualized here, but you get the idea. Publishers: the service that published the event — you can't see them on this diagram, but imagine a publisher here; they're where the event sources come from. Event sources: where the event took place. Event subscriptions: the mechanisms that route the events — over here you can see subscriptions. Subscription expiration: where you set an expiration for the event subscription. Event handlers — over here; you could call them consumers if you like — the app or service that receives the events. Event delivery: delivering events in batches or as single events, depending on how you want to send those messages. And batching: sending a group of events in a single request. And there you go.
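For a sense of what an event actually looks like on the wire, here's a sketch of a BlobCreated event in the Event Grid event schema — the IDs, names, and timestamps are made up for illustration:

```json
[{
  "id": "9aeb0fdf-c01e-0131-0922-9eb54906e209",
  "topic": "/subscriptions/<sub-id>/resourceGroups/event-grid-basics/providers/Microsoft.Storage/storageAccounts/eventgridbasics",
  "subject": "/blobServices/default/containers/basic/blobs/hello.txt",
  "eventType": "Microsoft.Storage.BlobCreated",
  "eventTime": "2024-01-01T12:00:00Z",
  "data": {
    "api": "PutBlob",
    "contentType": "text/plain",
    "url": "https://eventgridbasics.blob.core.windows.net/basic/hello.txt"
  },
  "dataVersion": "1",
  "metadataVersion": "1"
}]
```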
Hey, this is Andrew Brown from ExamPro, and we're going to take a look at Event Grid basics, so let's get to it. The first thing we want to do is search for Subscriptions, because in order for us to use Event Grid, we need to make sure it's turned on. We'll go into our subscription and then under Resource Providers — ah, there it is. This lists all the providers that are registered, and we're just making sure Event Grid is registered, because it's not always on by default. You'll know it's registered because it'll have a green check mark and say Registered.

Once that's done, we can proceed to create a storage account, because we're going to integrate a storage account into our Event Grid. We'll create a new storage account and a new resource group — I'm going to call the resource group event-grid-basics — and we'll name the storage account eventgridbasics. Note that resource group names can have hyphens, but storage account names can only contain lowercase letters and numbers. Let's just make sure we spelled that right: eventgridbasics. I can't remember if these are fully qualified domain names — if they are, you might have to add some numbers on the end to make it unique, but it is what it is. For the region — it just randomizes every time, but this is where you should probably set it. We have Standard and Premium; we'll leave it as Standard, and everything else seems fine, so we'll go Review + Create, it lets us review, and we'll hit Create. It's going to create the resource group too, so we'll just wait for it to finish deploying, and then we'll go into the resource.

All right, looks like it is finished
deploying, so we're going to go to that resource, go to Containers, and create a couple of containers. The first one is going to be called "basic" — we'll leave it as private; just make sure it's basic, not basics — and we'll create another container called "basic-alt". The idea is that we're going to use Event Grid to move a file from one container to another, and that's going to be facilitated by a Logic App, because that's going to be the easiest way to use Event Grid.

So what I want you to do is search for Logic Apps up here, and we're going to add a new logic app. Choose our resource group, event-grid-basics, and we'll name this event-grid-basics-lg — maybe "lg" to indicate that it's a logic app; short for logic, that's fine. From here we have a choice of a workflow or a Docker container; we're going to stick with a workflow. Knock on wood, we'll put it in the same region as our storage account, East US. We have Standard and Consumption — I'd rather do Consumption for this, so you pay only for what you use; we don't need an enterprise-level serverless application here, just the consumption model. We'll leave this set to disabled, which is totally fine. There's nothing else to do, so we'll go Review + Create, create this logic app, and wait for it to finish deploying.

All right, that should have been very, very quick — like under 10 seconds
there so we've gone into the resource so just click go to the resource and so we have this
very fun interface and so what we need to do is start with a common trigger there's a few different ways to get to
it but there should be something on the front here i don't know if they redesigned this
recently so i'm just gonna search start with a common trigger oh yeah it's up here okay i'm
being silly um and so what we want to do is uh because this is an event grid uh follow along we want to
click on when an event grid resource event occurs and so this is the designer where we can make things a lot easier for ourselves
and so we're going to have to first sign in to authenticate so i'm this is my tenant example training inc so we'll go
ahead and get connected there let's give it a moment we'll select andrew brown which is
totally fine and now that is connected so that is great so once we are signed in we can
click continue and we're going to go ahead and select our subscription here
and we need to choose a resource here so i guess in this case it's going to be
event grid event grid or
hmm i could have swore yeah yeah i think that's what we want to do let me just
double check here oh you know what it's just not for some reason i'm i'm searching it's not auto completing
properly okay i just wasn't sure there and as far as i can remember this would be probably an event grid topic
and then we need to give this a resource name so um
let me just think about this for a moment okay all right so i think i understand where
my confusion was it was because we clicked continue and i didn't see event grid anymore so i thought we had to
configure it when it was already configured right so this is where we were so we're not we at this stage like
event grid is already hooked up so it's ready to be triggered so this is the step that follows into it which is where
we want to do our storage account so that's where i was getting confused so we'll choose our
subscription here it's okay you know if you never get confused just step step back a couple steps and just double
check what you're doing happens to me all the time so um what we want to do is actually
connect storage uh storage accounts so we type in storage accounts here great uh we'll have that selected and
then we need to select our storage account so this one's called event grid basics
and then we're gonna have to enter in um some additional information event type so we want to have it happen when
we add something to the container so the basic container so we'll do blob created
and then from there we need to actually filter out the information so we need to add a new parameter and i think we'll
have to do it on the prefix filter so a filter like whatever yeah so that's probably a good idea
because then we could place it into a particular place and i believe that
there are very specific filters that you can do for this because
if i recall there's like standardized ones yeah see here like it's always going to be
forward slash blob services default containers etc and you'll know that because you know if you read the
documentation and you have to do that stuff you'll figure that out so i'm just going to type it by hand here blob
services default containers and then we can put our container name so basic
uh and i believe we have it without the s there so
uh services see i don't trust my writing here so i'm just gonna copy paste it in okay
and that looks good to me so i think that is what it needs to be so we'll go ahead and hit the next step
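as a rough illustration of that prefix filter, here is a small javascript sketch — a toy model, not the real event grid service, with the container and blob names mirroring the demo:

```javascript
// Event Grid formats blob-event subjects like:
//   /blobServices/default/containers/<container>/blobs/<blob>
function blobSubject(container, blobName) {
  return `/blobServices/default/containers/${container}/blobs/${blobName}`;
}

// A subject prefix filter is effectively a startsWith check.
function matchesPrefix(subject, prefix) {
  return subject.startsWith(prefix);
}

const filter = "/blobServices/default/containers/basic/";

console.log(matchesPrefix(blobSubject("basic", "data.webp"), filter));    // true
console.log(matchesPrefix(blobSubject("basicalt", "data.webp"), filter)); // false
```

note the trailing slash on the filter: without it, a sibling container whose name merely starts with "basic" would also match the prefix.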
so the idea is anything in that folder like when something's added to that folder then follow up with this
operation right um and maybe before we do that we should probably um
you know observe that this stuff works that's probably a good idea so what i'm going to do here
is i'm going to make my way back over to our storage account so just close this tab here i'm going to open a new tab
and we're going to make our way over to storage accounts and we'll go to event grid basics
and we will go to containers and we'll click into our basic container because i want to just see that this is
working and i'm going to need a file to upload so let me just go grab an image really
quick all right so i just grabbed an image off the internet so i just have data here
but before we upload we probably should save what we have because if we don't save it we're not going to be able to
observe it so i went back to logic app and we just hit save in the top left corner
so we'll give it a moment to save and it looks like it's saved now i'm just going to go back over to here
if we can look at some of the code that gets executed i'm just trying to remember
where it is because once it executes we want to um see what happened right so what i'll do
is i'm going to go all the way back over here and i'm just going to go and drag or actually i'll hit the upload button
so i don't trust that that there and i'll drag it onto here nope i still don't trust it so what i'm going to do
is just click the files and i'm just going to grab it this way and say open and we'll do upload
and so that is now uploaded and so there is somewhere where we can observe um where stuff has happened so i'm just
trying to remember where it is
i mean we could run the trigger yeah we probably should run the trigger right
run i think it's running so we'll just give it a moment okay
you know i was thinking about it it doesn't make sense we shouldn't have to run it because it should just happen
automatically i think it's on the overview page ah okay
so if we look here we can see the run history and so and there's also trigger history of when the
things are triggered so we could we could manually fire it but doesn't make sense so i think this is the run that we
just did if we click into here yes this is what it is so here we can see what what has happened so if we expand it
we can see the inputs right so it's we have a blob created um it might show some information so
here we can see data yep and it's a webp file and so it's gotten this far through and so that's a great way to kind of
like debug so in the logic app run history you can inspect each step but right now we are using event grid
to do that integration right we're just doing it through a logic app because it's a lot easier
so now that we have that what we should do is go back to our designer and we're
going to have to add the follow-up step um so we have this oh yeah okay so that's the first step
event grid doesn't show up there which is weird but um so we have this step here from our storage account and so the
next step what we want to do is put it into another container so that will be the
tricky part um so i'm just trying to remember what we do so we'll hit next step
and um i think what we need to do is initialize
a variable first because we're going to have to get some way to grab the name of the string because if we go
back to our run over here just give it a moment here
and we go into a run again here we need to extract some data to pass along because there are some limitations
in terms of how json gets passed along or data gets passed along and so what we want is we just want this part
of the name we want to say take this name as the identifier so that when we're copying stuff over it will work
and so what we'll have to do is store that into an intermediate variable so
we'll just type in variables here and i'm just seeing
oh yeah so they look like this because i can remember they might be in the built-in yeah that looks a lot better
and so we need a variable and it's initialize variable and we're going to name this file name
and this is a string of course and now we need to insert the value so in here what we need to do is write
an expression in order to extract that information out um so what we'll do is go to the
expression tab and over here you can see we have all sorts of expressions that we can use so i'm going to type in last
parentheses and then in there we'll do split parentheses
and then what we're looking for is trigger body and then we'll do question mark
square braces single quotations subject how did i know how to do that
i looked it up i looked it up somewhere and you know i just don't feel like
there's much reason to to teach this part because you mean if you really need to know you
can go here um and learn all about it but a lot of times like if you need something you can just say i need this
kind of function somebody's already done it right because there's so many common use cases so i probably search something
like how do i get the name out of the the thing you know like for the blob and somebody had that there but it makes
sense to me so let's hit okay here and it should turn purple because it is a dynamic expression if you type it directly into the value field it
probably won't work correctly you have to type it in the expression tab and then hit okay so it shows up like that uh but you notice we
typed in like trigger body so if we go back over to our run here um
this is the body here so when they say trigger body they're talking about here and then it was just grabbing that
subject line there all right so that would be the second step and
that gets it into a variable but the next part is we need to actually um get the blob content and then insert it
and then create a new blob so we'll do is hit next step and we'll type in blob and see if we can find anything here
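as a quick aside, here is a sketch of what that last(split(triggerBody()?['subject'], '/')) expression computes — triggerBody is a stand-in object here with the payload shape we saw in the run history, not the real logic apps runtime call:

```javascript
// Stand-in for the Logic Apps triggerBody() call, using the shape of the
// Event Grid blob-created payload from the run history.
const triggerBody = () => ({
  subject: "/blobServices/default/containers/basic/blobs/data.webp",
});

// Equivalent of last(split(triggerBody()?['subject'], '/')):
// split the subject on '/' and keep the last segment, i.e. the blob name.
const parts = triggerBody().subject.split("/");
const fileName = parts[parts.length - 1];

console.log(fileName); // "data.webp"
```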
and from here we need the get blob content using path v2 action so i'm just going to scroll down here
and take a look for it there it is based on the path and we'll go down here and um
i guess it would be access key oh because we're setting up a connection for the first time
so enter name for connection
um i know what the storage account is but what is the connection uh connection
name i do not remember give me two seconds okay there wasn't much to help me here because what i remember before
was that you click it and you'd authenticate it like the event grid but it's not doing that so maybe we just
have to name it something so i don't know we'll just say azure storage account maybe it just wants a
name maybe it doesn't really matter oh yeah like there's sign in
that's what i want so connection name yeah so we'll say uh you know
storage account event grid okay because if we can just single sign on
let's do that that's super easy and we'll click that there okay so this is starting to look how i
expect it to look and so we need the storage account name i don't know why it's not showing me any
names here but that's okay we'll just go over back to our storage account here and it's
called event grid basics so we'll type in event grid basics event
grid basics uh that's custom value sure
i mean that's what its name is i'm not sure why it's not auto completing but here what we need to do is we need
to provide the path so it's going to be forward slash basic ah and so now there's our environment
very or that variable so we'll just click that there so that will make it super super easy
now notice that it is showing basic now so i just clicked here the folder we typed it in manually but we could have
clicked uh here and then put the environment variable in or the uh this initialized variable in here but i did
type that manually and it still did work correctly so we are okay here infer the content type sure why not
um it doesn't matter if they do so this gets the content so now this gets the path and so the next thing is
actually to create the blob you can't like do an easy clone you have to do at this um intermediate step
that's just how it works um and so what we'll need to do is go to our built-in ones here we'll type in
blob again maybe standard and this time we want to create a blob
so there it is uh block bob no we just want a blob and so what we'll do is
i guess we have to connect again i'm surprised it's not showing the name yeah it's just the name that's fine so we'll
go back over here i just don't want to type it wrong so we'll just copy paste it in
event grid basics grid basics enter custom value because it's giving
us so much trouble for no particular reason make sure there's no space on the end
there there now works fine um in this case what we want is basic alt
and the blob name can be the file name which is totally fine and the blob content will be the file
content and i don't think we need anything else so what we'll do is go ahead and click
off and we will save alright so that's just the way we're going to have to do it
so what we can do is go back to our overview and
we'll go back to our basic folder and we'll delete data say ok
and we'll go upload we'll select our file again we'll grab it
we will upload and then we will make our way back over to our logic app
close this tab here so we don't get too mixed up refresh the page
and it failed so it failed for some reason so something has not been configured correctly
it failed on the initialize variable so something's wrong there so unable to process the template language expression
in the actions initialize variable uh inputs at line zero column zero the template language function is not defined
or not valid so it's possible i just spelled it wrong so what we'll do is go back to our event
grid we'll go back to our logic app designer here initialize variable we will click it
and we probably just typed it wrong spilt we wrote spilt instead of
split so if that's wrong we'll just scroll on down and we'll just take a look
i could have sworn that it auto-completed for us oh you know what it is spelled wrong it
should be s-p-l-i-t split all right
and i'm just double-checking to see if there's any other problems here nope looks fine to me so go ahead and
say update we will save it in the top left corner we'll go back we'll delete our file here
we'll say okay and we'll have to select a new file we'll click open let's double check make
sure that's been saved it looks like it's been saved we'll hit upload we'll go back to our overview page
it's already running super fast by the way and we'll click into it and we'll see if
we get any other failure so there's another failure that's totally okay so we'll just expand it this request is
not authorized to perform this operation using the permissions so it does not like the permissions i
gave it totally fine so we will go back to our app designer
we will go to this second step here even though it did select this properly so we'll change the connection
i guess we'll add a new one so we tried ad integrated oh let's do managed identity you must enable managed
identities in the logic app to use managed identity authentication and you must grant
required access to the identity in the target resource okay
there is an identity tab so we can go over there and take a look there quickly i don't remember it being that hard to
do a system assigned managed identity is restricted to one per
resource and is tied to that resource's lifecycle you can grant it permissions etc etc um
can we just turn that on and hit save well it can be granted access to resources protected by azure ad sure let's give it a go
all right so um it seems like we have to assign some role stuff so we can try and assign a role um
can we do the subscription level contributor okay
so there's a few different ways you can authenticate so hopefully this will be the easiest way to do it
we'll refresh here did it assign it and i don't think it said it all right so give me a moment and let me
see what i can figure up okay you know just to make this easier i think what we should do is just do the
access key because that seems like the easiest way to do it i was just hoping that we could have you know
just did a simple sign in here but it's not a big deal so we'll hit change connection we're going to add a new
connection just say storage account event
event grid key and so this is going to want the azure
storage account name so this one will be the name of the storage account if we can find it
it's called event grid basics and then we need the azure storage account access key
so there's probably a tab called keys yep and we will show the key and we will
copy the key if i don't have to pass along keys i like to not do that please check your account
info again storage account access key should be a correct base64 encoded string come on
give me a break here i am doing what you asked me to do so we will try this again
this thing just hates me today give me a second okay you know what it was really short so i
really don't trust it so let's just do i just cleared it out there i didn't do anything else what we're going to do is
go back here click the copy and then right click and paste that's so much longer okay
that has to be the right key we'll hit create and we'll give it a moment
okay great so that's for that one
um but this has to have the right connection as well so what we'll do is just change the connection
you have a few here eh um and the one we want is the one that's valid so we'll go this
one down below as you can see a few attempts here and we'll save it
and we'll go back to our overview here i'm just gonna close that tab out we're gonna close
this out we're gonna go back into here we're gonna go into our containers we're gonna go into our basic
we're gonna go ahead and delete this we'll say okay and we will upload a new file we will
choose the new file we'll choose data upload it we'll go back over here
and i want to see the latest run here we'll give this a refresh is it running
it looks like it's running it's hard because this one looks like it just failed and and now the the messaging is
getting really muddy here what is it doing so we'll click off here sometimes the portal is a bit funny is
it just triggering over and over again did we make an infinite loop uh oh
okay i think we have a problem here well if we go here is it basic or it's basic alt
this one's basic so what's the problem
we'll refresh failed why did it fail conflict
another active upload session exists please retry after some time okay
uh well let's just go take a look here go back it's here so it's it's here
so it clearly has worked why it's triggering multiple times i don't know um don't particularly like that
there we go let's go ahead and delete this one here and it's just it's just going over and
over and over again so there's something wrong with my workflow
so this looks fine to me that looks fine to me maybe it's triggering oh you know what
the the parameters out of here so this is supposed to have a prefix here so what's happening is that it's triggering
on any time a basic one is set up or a basic alt one and it's just stuck in an infinite loop which is really really bad
um so we did do this earlier but for whatever reason um
the changes still are not here so what we'll need to do is set up that prefix so what we'll do is type in
blob services default containers
basic because we really don't want to trigger it on any but that that container there
and uh did it save it it doesn't look yeah i mean it should be
there so what we'll do is go ahead like why is it not filtering oh
i guess it wants a filter based on name but we gave it its name so i'm not sure what else
we would have to type there okay um i'm just gonna put um dot
it's the prefix filter so data i guess i don't know like it's not letting me save okay there we
go we'll save that i just want to stop the infinite loop there for a moment so we'll go back over to the overview
and we'll just make sure we're not running up our bill here and i'm just refreshing i just want to
see that's not triggering anymore so it stopped triggering which is good and we'll go back over to here
and we'll look at this prefix filter because i i remember having to do this so a filter like sample etc etc so we'll
type it in again i guess blob services defaults containers
basic it's very odd because like we typed it oh you know what i probably did i typed
in the filter parameter here and we're supposed to add it then put it in there so it's just me getting confused by the
ui silly me okay so what we'll do is go back to the overview
and this time we just want to see trigger once so go back to basic all we'll go ahead and delete this we'll say
okay and we'll go back to our event grid we'll go in or
our event grid our basic our basic container we're going to go ahead and delete
uh data here again and we're going to go upload one more time
it's actually good that we had that problem because i got to show you uh why filters are so important um when
we're dealing with uh the app logic there um or logic apps so we'll go ahead and hit upload we'll go
make our way back over here we're gonna give this a refresh and now we have a new one and it only
happened once and that's what we wanted to happen so we go back over to here and go to basic alt there it is so
that's a means to which we can use event grid to integrate stuff you can see logic app is extremely useful for
developers building all sorts of tools but we are all done here
and what we'll do is make our way over to our resource group and we are going to just go ahead and
clean up so we'll go into event grid basics and we'll go ahead and delete this
resource group there we go and it's going to go ahead and delete there
uh yeah and there you go hey this is andrew brown from exam pro
and we are taking a look at azure event hub so event hub is an event ingestion service that can consume millions of
events from anywhere and process them in real time or via micro-batching thanks to the auto-inflate feature which
automatically scales the number of throughput units to meet your changing needs so it provides an easy way to connect
with your apache kafka applications and clients here is kind of an overview illustration of event hubs so the idea
over here is you have different protocols so https amqp and kafka for your event producers to then
uh enter into azure event hubs you have partitions within your hubs you have consumers uh and then there is the
received events the event receivers so you know hopefully that gives you a quick overview let's go dive a bit
deeper let us take a look at the pricing for
azure event hub but before we do i just want to expand on a few initialisms so we know what they stand for we have
capacity unit for cu processing unit for pu and throughput unit for tu so across here we have four different pricing
plans basic standard premium dedicated lots of azure services like to have a whole lot of
specialized plans uh built into their services uh but as you can see as uh you go right the more expensive it goes
uh but there's some key differences so for azure event hub if you're on basic you do not get capture
capture costs a bit more it costs extra at standard and at premium and dedicated it's included if you
want to use apache kafka you're not going to get it in basic if you want a schema registry you're not going to get
that in basic if you need extended retention you do not get that in basic or standard so hopefully that gives you an
idea of the differences of these tiers let's take a look at the key concepts a
lot of these we'll cover in future slides so this one is a bit text heavy so do not worry
but azure event hub helps you build your big data pipeline to analyze logging anomalies and user and
device telemetry where you only pay for what you use so some of the key concepts here are namespace so it is an endpoint
for receiving and distributing events to the event hubs we have event hub where your events will be delivered event hub
cluster which is a dedicated event hub with a 99.99% sla event hub capture this allows you to automatically capture and
save streaming events we have uh event hubs for apache kafka
so this is compatibility with apache kafka's setup event publishers these are
applications or services that publish events to event hub publisher policy this is a unique id used to identify
publishers partitions are used to organize the sequence of events in event hub event consumers are applications or
services that read events from event hub consumer groups which enable consuming applications to
each have a separate view of the event stream stream offset which holds the position of an event inside a partition
checkpoint is the process of distinguishing between read and unread events but let's go look at these all in
more detail let's talk about scaling because that's
a very important concept in event hub so the idea is that there's this feature called auto-inflate it's basically just
a check box here and will automatically scale up to the maximum tus
throughput units based on the traffic demand it's not available for basic pricing as we saw over there
we also have this option if we're on the premium tier you're going to notice there is no auto-inflate because it's
already there by default and the unit is different it's processing units so you'll just slide that left and right
that just kind of reflects what we saw in the pricing section okay
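a rough sketch of the scaling behavior described above — an illustrative model only, not the service's actual algorithm; the 1 MB/s-per-TU figure is the documented standard-tier ingress rate:

```javascript
// Auto-inflate only scales the throughput unit (TU) count up, never down,
// and never past the configured maximum.
function autoInflate(currentTUs, maxTUs, ingressMBps) {
  const MB_PER_TU = 1; // ~1 MB/s ingress per standard throughput unit
  const neededTUs = Math.ceil(ingressMBps / MB_PER_TU);
  return Math.min(Math.max(currentTUs, neededTUs), maxTUs);
}

console.log(autoInflate(2, 10, 5)); // 5 - scaled up to meet demand
console.log(autoInflate(2, 4, 9));  // 4 - capped at the configured maximum
console.log(autoInflate(3, 10, 1)); // 3 - never scales back down
```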
let's talk about an actual event hub so what we do is go name our hub and we got
a few options here we have partition count so partitions are a data organization mechanism that
relates to the downstream parallelism required in consuming applications sounds very fancy but it's just a way of
segmenting your data so that you have faster concurrent reads you have retention message retention so
this is the period for which your events will be retained so that means that when they're past that period they are poof
they are gone then you have capture so capture enables you to automatically deliver the streaming data in event hubs
to an azure blob storage or azure data lake store account of your choice with the added flexibility of specifying a time
or size interval setting up capture is really fast there are no administrative costs to run it it scales
automatically with event hubs throughput units event hubs capture is the easiest way to load streaming data into azure
and enables you to focus on data processing rather than on data capture so just a great way to offload into
azure blob storage or azure data lake store there you go
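to make that "segmenting your data" idea concrete, here is a toy sketch of partition assignment — the real service hashes the partition key server-side, so this little hash is purely illustrative:

```javascript
// Events with the same partition key always land in the same partition,
// which preserves their ordering while other partitions are read concurrently.
function pickPartition(partitionKey, partitionCount) {
  let hash = 0;
  for (const ch of partitionKey) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple rolling hash
  }
  return hash % partitionCount;
}

const partitionCount = 4;
const a = pickPartition("device-42", partitionCount);
const b = pickPartition("device-42", partitionCount);
console.log(a === b); // true - same key, same partition, every time
```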
let's take a look at producers because that is what is going to emit events that will get into event hub so on the
right hand example we have some code because the idea is that you will have to instrument your producer to send out
events so here you can see we've created a client we'll have to have a connection string the event hub name so we know who
we're talking to you would create a producer and then here what we're seeing is a batch job so
they create a batch and they push some data on there and then they send the batch and then when
they're done they close the client if there are any errors we can get some exception handling this is a javascript
example so a producer also known as a publisher emits data to the stream publishers can publish events using the
following protocols so https most azure sdks probably use https underneath you have amqp
which is a very popular uh queuing protocol kafka protocol so if you're using kafka or you're using that
protocol then that's another way to get in here and we just walked through that example you can publish events either
one at a time or batches as we see here on the right hand side there's a limit of one megabyte regardless of whether
it's a single event or a batch beyond one megabyte events will be rejected for authorization
publishers use either azure ad with oauth 2.0 issued jwt tokens or shared access signatures sas which is something we see
a lot in azure services let's just talk about some other things so https versus amqp for publishing events amqp
requires the establishment of a persistent bi-directional socket in addition to tls or ssl so
amqp has higher network costs when initializing the session but amqp has higher performance for frequent publishers
and can achieve much lower latencies when used with asynchronous publishing code https requires additional tls
overhead for every request so it's going to be up to you to decide whether you want to use https
or amqp protocol for publisher policies event hub enables granular control over event publishers
through publisher policies and these publisher policies are run-time features designed to facilitate
large numbers of independent event publishers with publisher policies each publisher uses its own unique identifier
when publishing events to an event hub using the following mechanisms and i guess there was supposed to be more
there maybe but i think that's pretty much it because we don't really need to know much more
about publisher policies to be honest uh if there's an opportunity if we do follow
along that is opportunity maybe we will take a detour into that to see if it's worth our time but as far as i'm aware
of i don't think it's that important there we go
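the one-megabyte publish limit mentioned above can be sketched with a toy batch object — this mimics the "try to add, check whether it fit" pattern of SDK batch types, but it is not the @azure/event-hubs API:

```javascript
const MAX_BATCH_BYTES = 1024 * 1024; // 1 MB limit, single event or batch

function createBatch() {
  const events = [];
  let size = 0;
  return {
    // Returns false (rather than throwing) when the event will not fit.
    tryAdd(event) {
      const bytes = Buffer.byteLength(JSON.stringify(event), "utf8");
      if (size + bytes > MAX_BATCH_BYTES) return false;
      events.push(event);
      size += bytes;
      return true;
    },
    get count() { return events.length; },
  };
}

const batch = createBatch();
console.log(batch.tryAdd({ body: "hello" }));                     // true
console.log(batch.tryAdd({ body: "x".repeat(2 * 1024 * 1024) })); // false - over 1 MB
```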
let us take a look at a consumer so here's an example of consumer code very similar
kind of similar to the last one here so we are uh creating a consumer client and so here we need a connection uh
connection string container name where we're gonna put this stuff looks like we're going to put in azure blob storage
we're going to create that client and then we have the subscription so the idea is that
it was going to listen for data to consume okay
and then over here we can see that it updates a checkpoint we'll talk about checkpoints soon enough but let's just
talk about consumers in general so consumer also known as a reader receives data to process from the stream all
event hub consumers connect via the amqp 1.0 session and events are delivered through the session as they become
available the client does not need to poll for data availability so that's that we'll next talk about consumer
groups well let's talk about consumer groups
here this is very text heavy sorry but there's not much to show a consumer group is a view the state
position or offset of an entire event hub consumer groups enable multiple consuming applications to each have a
separate view of the event stream and to read the stream independently at their own pace with their own offsets so in a
stream processing architecture each downstream application equates to a consumer
group there's always a default consumer group in an event hub and you can create up to the maximum number of consumer groups
for the corresponding pricing tier there can be at most five concurrent readers on a partition per consumer group
however it's recommended there's only one active receiver on a partition per consumer group
some clients offered by azure sdk are intelligent consumer agents that automatically manage the details of
ensuring that each partition has a single reader and then all partitions of an event hub are being read from this
allows your code to focus on processing the events being read from the event hub so it can ignore many of the details of
the partition so there you go let's talk about offsets for azure event
hub so an offset is the position of an event within a partition and so here's an example a very ugly graphic from the
documentation but it makes our point clear the idea is you have a partition and
you have events within the partition all these little lines represent events and that's where our offset is so
offsets enable a consumer or reader to specify a point in the event stream from which they want to begin reading events
you can specify the offset as a time stamp or as an offset value consumers are responsible for storing their own
offset values outside of the event hub service within a partition each event includes an
offset so there you go [Music] all right let's talk about checkpointing
There's really not a lot to say about it or to visualize; it's kind of part of offsets. Checkpointing is a process by which readers mark or commit their position within a partition's event sequence. Checkpointing is the responsibility of the consumer and occurs on a per-partition basis within a consumer group. This means that for each consumer group, each partition reader must keep track of its current position in the event stream, and can inform the service when it considers the data stream complete.
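To make the per-consumer-group, per-partition idea concrete, here is a minimal in-memory sketch of a checkpoint store. This is illustrative only; the real service has consumers persist checkpoints externally (for example in Blob Storage), and all names here are made up for the example.

```javascript
// A minimal in-memory sketch of checkpointing: each consumer group tracks
// its own offset per partition, independently of every other group.
// (In practice checkpoints are stored outside Event Hubs, e.g. in Blob Storage.)
class CheckpointStore {
  constructor() {
    this.checkpoints = new Map(); // key: "group/partition" -> offset
  }
  update(consumerGroup, partitionId, offset) {
    this.checkpoints.set(`${consumerGroup}/${partitionId}`, offset);
  }
  get(consumerGroup, partitionId) {
    // -1 means "no checkpoint yet: start from the beginning"
    return this.checkpoints.get(`${consumerGroup}/${partitionId}`) ?? -1;
  }
}

const store = new CheckpointStore();
store.update("analytics", "0", 42); // analytics group is at offset 42 on partition 0
store.update("billing", "0", 7);    // billing group reads the same partition at its own pace
console.log(store.get("analytics", "0")); // 42
console.log(store.get("billing", "0"));   // 7
console.log(store.get("billing", "1"));   // -1 (no checkpoint yet)
```

The point of the sketch is the key: because the checkpoint is scoped to both the consumer group and the partition, two downstream applications can read the same partition at completely different positions without interfering with each other.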
Hey, this is Andrew Brown from ExamPro. We're taking a look at Schema Registry, something that is not uncommon when we're looking at event buses; there's something similar in AWS. The idea here is that you're able to enforce a particular schema for your data, which helps avoid things breaking for your consumers and gives you a standard expectation of what the data looks like. Schema Registry provides a centralized repository for schemas. This gives your producer and consumer applications the flexibility to exchange data without having to manage and share schemas between them, and to evolve at different rates; that's the key thing there, evolve at different rates. Here we have our schema group (we'll talk about schema groups in a second), then schema versions, and then the actual schema itself. Schema groups live under your namespace and can be accessed by all event hubs under that namespace; here you can see we're just setting some options. And then here's the schema itself, a very simple example: a type of record, a name of "Order", and a couple of fields. The idea, again, is that Schema Registry really helps you enforce what you expect data to look like for your producers and your consumers. And there you go.

Hey, this is Andrew Brown from ExamPro,
and we are taking a look at Apache Kafka. The reason we're talking about it is that Event Hubs has a compatibility mode for it. Apache Kafka is an open-source streaming platform for creating high-performance data pipelines, streaming analytics, data integrations, and mission-critical applications. Kafka was originally developed at LinkedIn and open-sourced in 2011. Kafka is written in Scala and Java, so to use it directly you'll generally need to know how to write Java. The idea is quite simple: you have producers, consumers, and topics, and there's a cluster with partitions, so it looks similar to Event Hubs and other streaming platforms. Kafka data is stored in partitions that can span multiple machines, because it's built for distributed computing. Producers write via the Kafka producer API, consumers read via the consumer API, and messages are organized into topics: producers push messages to topics, and consumers listen on topics.

So let's talk about that Kafka compatibility. Event Hubs provides an endpoint compatible with the Apache Kafka producer and consumer APIs for version 1.0 and above, so Event Hubs' Kafka compatibility is an alternative to running your own Apache Kafka cluster. We said they're very similar: Event Hubs calls it a namespace, Kafka calls it a cluster, so you can see the translation. The idea is that when you enable this, the endpoint exposes what Kafka clients are used to seeing, so you don't have to run a Kafka cluster; you just run Event Hubs. If you've already written code that works with Kafka, the idea is that you can drop in Event Hubs as a managed replacement. Now, does Event Hubs work exactly like Kafka? I'm not a hundred percent sure, so there could be some feature differences, but in terms of protocols and endpoints you're able to replace Kafka with Event Hubs if you need to; that's why they have this functionality. So there you go.

Hey, this is Andrew Brown from ExamPro, and we are looking at partitioning, or
partitions, for Event Hubs. Event Hubs organizes the sequences of events sent to an event hub into one or more partitions. As new events arrive, they're added to the end of the sequence, so within a partition the oldest events are at one end and the newest at the other. Partitions hold the following data about each event: the body of the event, a user-defined property bag describing the event, metadata such as its offset and its number in the stream sequence, and the service-side timestamp at which it was accepted. Partitioning allows multiple parallel logs to be used for the same event hub, thereby multiplying the available raw input/output throughput capacity. You can use a partition key to map incoming event data into specific partitions for the purpose of data organization; the partition key is a sender-supplied value passed into an event hub. So there you go.
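The partition key idea can be sketched with a tiny hash function. To be clear, Event Hubs' actual hashing is internal to the service and not this function; the point of the sketch is only that the same key deterministically maps to the same partition, so related events stay together and ordered.

```javascript
// Illustrative only: Event Hubs' real hash is internal to the service.
// The point is that a given partition key always maps to the same partition,
// so events sharing a key land in the same log and keep their relative order.
function pickPartition(partitionKey, partitionCount) {
  let hash = 0;
  for (const ch of partitionKey) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple string hash
  }
  return hash % partitionCount;
}

const partitions = 4;
const p = pickPartition("device-001", partitions);
console.log(p >= 0 && p < partitions); // true: always a valid partition index
console.log(p === pickPartition("device-001", partitions)); // true: same key, same partition
```

This is also why adding more partitions multiplies throughput: keys spread across independent logs that can be written and read in parallel.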
Hey, it's Andrew Brown from ExamPro. We're taking a look at event retention for Event Hubs. Published events are removed from an event hub based on a configurable, time-based retention policy. The default value, and the shortest possible retention period, is one day (24 hours). For Event Hubs Standard the maximum retention is seven days, and we saw on the pricing page that retention can differ based on tier. For Premium and Dedicated, the maximum retention is 90 days. If you change the retention period, it applies to all messages, including messages that are already in the event hub. You cannot explicitly delete events. The reason for Event Hubs' time-based limit on data retention is to prevent large volumes of historical customer data getting trapped in a deep store that's only indexed by timestamp and only allows for sequential access, which doesn't sound too good. If you need to archive events beyond the allowed retention period, you can automatically have them stored in Azure Storage or Data Lake by turning on the Event Hubs Capture feature, which we talked about earlier. If you need to search or analyze such deep archives, you can easily import them into Azure Synapse or other similar stores and analytics platforms. So there you go.
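The retention rule above is purely age-based, which can be sketched in a few lines. The tier-to-days mapping reflects the limits just discussed (the tier names and day counts here are the maximums mentioned, used as the effective window for the sketch).

```javascript
// Sketch: time-based retention means an event ages out once it is older than
// the retention window, regardless of whether any consumer has read it.
const RETENTION_DAYS = { standard: 7, premium: 90, dedicated: 90 };

function isExpired(enqueuedAt, tier, now = new Date()) {
  const retentionMs = RETENTION_DAYS[tier] * 24 * 60 * 60 * 1000;
  return now.getTime() - enqueuedAt.getTime() > retentionMs;
}

const eightDaysAgo = new Date(Date.now() - 8 * 24 * 60 * 60 * 1000);
console.log(isExpired(eightDaysAgo, "standard")); // true  (past the 7-day maximum)
console.log(isExpired(eightDaysAgo, "premium"));  // false (well within 90 days)
```

Note what is absent: there is no per-event delete and no "read" flag, which is exactly why Capture exists for anything you need to keep longer.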
Hey, this is Andrew Brown from ExamPro, and in this follow-along we are going to learn all about Azure Event Hubs. What I want you to do is go to the top here and type in "event hub", and we are going to create ourselves a new Event Hubs namespace. We'll go here, hit Create, and create a new resource group, as we always do, called my-event-hub. Then for the namespace name we'll say my-event-hub; if it doesn't let you use that, you'll have to put some numbers on the end. I'm putting some numbers here because these names are often taken. It doesn't matter what the location is; just choose whichever one. We're going with Basic, because there isn't a huge difference between the pricing tiers in terms of the feature set we want to use today. So go ahead and create this namespace; we'll give it a moment, hit Create, and then just wait for the namespace to provision. All right, so after waiting a couple of minutes there, our namespace is
deployed. We're going to go ahead and create ourselves an event hub; I'm going to call this one my-hub. We'll go ahead and hit Review + create, and then Create; these create very quickly, so we won't have to wait too long. We need to set a shared access policy, so I'm going to go into the hub, check Manage, and call this policy my-sas (a shared access policy). There we will now have a primary key and connection string, so we can actually connect to it. What we're going to do next is go to GitHub and create a new repository: go down below, pick the exam-pro account, and call it my-event-hub. That name's already taken because I've done this before, so for mine I'm going to add "new" to it. We'll set it to private, and we're going to use Node.js, so type in "Node.js" for the .gitignore so that it ignores node_modules. You're going to want to have Gitpod or Visual Studio Code installed on your computer. The easiest way is Gitpod, because these environments are temporary and it's free to utilize, so go get the Chrome extension if you can; or, if you don't want to install the extension, all you have to do is attach this to the repo URL to launch a Gitpod environment. So give that a moment to launch.
And there we go. I do have some code for this; I'm just looking for it off screen. We're going to need a couple of files: a new file called send.js, and a new file called receive.js (I'm not going to keep typing the full word "receive" because I'm always really bad at spelling it, so I'm saving myself some trouble). We're also going to have to initialize a new package.json file, and then get a couple of things installed: npm install @azure/event-hubs @azure/storage-blob @azure/eventhubs-checkpointstore-blob --save-dev, to make our lives a bit easier. It seems like I typed something wrong there, so I'll just hit up; I forgot the forward slash in the package names. Then we'll search for "event hub azure javascript", because I believe, yep, I basically used this quickstart but modified it to make it a little bit easier. I think this is the tutorial; yeah, it looks like this. This is the JavaScript one, so for send we
will grab this code here; we're not going to do it exactly the way they do, but pretty close. Then there is the receive code, so go down below and grab that; as you can see, there's a lot going on there. That will be our receive. There are a couple of things we need to set properly: these will all be environment variables. So we'll go to the top and add const process = require('process'); that's going to allow us to read our environment variables. We'll copy that, save, and paste it into the other file as well. Then this is where we replace the hard-coded values with environment variables (as you can see, I always have things pinging me, so let me just close Teams). To make this a little faster: the consumer group default stays as-is; the storage connection string becomes process.env.STORAGE_CONNECTION_STRING; then process.env.CONTAINER_NAME; then process.env.EVENT_HUB_NAME; and up here, process.env.EVENT_HUB_CONNECTION_STRING. Then we'll go over to our send.js and do something very similar: process.env.EVENT_HUB_CONNECTION_STRING and process.env.EVENT_HUB_NAME. So we need to set all of these. I'm just
going to copy this for a moment. I'll make a new file (File > New File; it doesn't really matter, we're just using this as a quick scratch pad). What I want to do here is delete out the JavaScript parts and put export on the front of each line, taking this one out. The idea is that we'll set them all here, which makes our lives a little bit easier when we have to set these values. We do have the connection string, because we saw it over here, so we'll grab the primary one (it doesn't matter whether you use primary or secondary); that is for the event hub. The event hub was called my-hub, I believe; we'll just double-check what the hub was called. Yeah, it's called my-hub up there. We'll need a storage account, so let me push this dialog out of the way (get out of here, I'm not trying to save a file; hit Escape a bunch of times),
and we'll go back and create ourselves a new storage account. I'll actually do this in a new tab so we can see what we're doing. We'll go over to Storage accounts and create a new one, in the same resource group, so we'll go down and pick my-event-hub, and for the name we'll just say something like myeventhub8888; again, you might have to change it based on what's available to you. We'll go ahead and hit Review + create, then Create. For the container name we'll probably just call it container, maybe container1; we just have to wait for this to create before we can grab the connection string. This usually doesn't take too long, just a couple of seconds. Okay, there we are; we'll go over to Access keys and grab the connection string from there. I believe this one should work; let me just double-check. Yeah, I think this will work, and if it doesn't, we'll find out pretty soon. We'll just generate the shared policy value and paste that in here. Double-checking, though: this looks identical to the event hub one, and that can't be correct. Going back, this is the storage account, so, oh, I actually have to hit the copy button; that's what I didn't do. We'll go ahead and paste that in, and theoretically this should work. So we'll
go ahead and copy these, drag our terminal up a bit, and paste them in. Then I'll double-check that they're there: env | grep EVENT_HUB shows both are set, grep STORAGE shows that one is set, and grep CONTAINER shows that one is set, so these are all in good shape. For the storage account we still have to create the container, so go here, create a new container called container1, and create that. Then we'll make our way back over here, and instead of just export we'll use gp env (the Gitpod CLI); this is just in case we have to restart the environment for any reason, so that these environment variables get exported again. We'll paste that in; I believe those are all set (I had to hit Enter on the last one). Let's see if our code works. Before we do a send, we actually have to set up two scripts in package.json so we can call them: one called send, which runs node send.js, and one called rec for receive (because I always spell "receive" wrong and just don't want to type it a thousand times). With those there, we'll do npm run send and see if that works.
It says a batch of three events has been sent. We'll go confirm that in Event Hubs: if we go to the Overview, it should show us that some messages were received. Sometimes there's a bit of a delay, so we'll give it a tiny amount of time and hit refresh here, because we know we sent them. While we're waiting for those to propagate, let's go back and actually look at the code, because we didn't really look at it. The way it works is that you define a client, which is the producer client; a producer is something that produces events, and it's very common in a messaging system to have a producer and a consumer. We create a batch, add all the events to the batch, send them all at once, and then close the client; if there are any errors, it will alert us about them. So we'll go back over here and do a refresh. I want to see messages, and messages would normally show up here; since I don't trust it, I'm just going to run it again (it clearly worked, because there were no errors), and then go back and refresh. I'm just waiting to see something under "processing data"; this is something that's really powerful with Event Hubs. I still don't see the messages, so just give me a second to debug this; I've done this lab like four times, so it should work, but sometimes it's trouble. Just a moment. All right, I literally did nothing, and now it's actually showing up in the messaging metrics for the hub. That's just something you have to consider: sometimes you just have to be a little bit patient. Let's see if we can receive those messages now by
running the other script. So I'm going to do npm run rec, and it should receive the messages, as long as nothing is typed incorrectly. We'll go back over here; we seem to have introduced a little mistake, so I'll go ahead and fix and save that, hit up, and it should receive the events and print the three of them out. There we go. The consumer is checkpointing to the storage account, but if we go to the storage account, there's nothing really intelligible in terms of what's in there; there is stuff in there, checkpoints, so I guess it saved a checkpoint. I personally don't know exactly what I'm looking at, but I guess the checkpoint is just the last point it read to. If we take a look at the code quickly, you can see it's called a consumer: we get a consumer client, there's a blob checkpoint store, and with the consumer client we subscribe. It says: are there any events? Let's consume them; if there are no events, write a console log to tell us about it; iterate through the events; then update the checkpoint, moving it to the next position, saying this is where we are now. That's pretty much all I wanted to show you. We can go ahead and save this code; we'll commit it as "eventhub code" (it doesn't matter what you name it) and sync the changes. Then we'll go ahead and clean up: go back to your resource groups, go to the event hub resource group, and delete the resource group. And there you go.
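As a recap of the send step: the pattern is to create a batch, try to add events until the batch is full, then send. Here is a local simulation of that tryAdd pattern; it is not the real @azure/event-hubs EventDataBatch, just a sketch of the size-limit behavior that class gives you, runnable without any Azure account.

```javascript
// Local simulation of the batch pattern used in send.js:
// tryAdd() accepts events until the batch would exceed its size limit,
// then returns false so the caller knows to send and start a new batch.
class SimulatedBatch {
  constructor(maxSizeInBytes) {
    this.maxSizeInBytes = maxSizeInBytes;
    this.events = [];
    this.sizeInBytes = 0;
  }
  tryAdd(event) {
    const size = Buffer.byteLength(JSON.stringify(event));
    if (this.sizeInBytes + size > this.maxSizeInBytes) return false; // batch full
    this.events.push(event);
    this.sizeInBytes += size;
    return true;
  }
}

const batch = new SimulatedBatch(64); // tiny limit just to force an overflow
console.log(batch.tryAdd({ body: "First event" }));   // true
console.log(batch.tryAdd({ body: "Second event" }));  // true
console.log(batch.tryAdd({ body: "x".repeat(100) })); // false: would exceed the limit
console.log(batch.events.length); // 2
```

In the real SDK a false return from tryAdd is your cue to call sendBatch and start a fresh batch; the simulation shows why the loop in send.js is written that way.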
Hey, this is Andrew Brown from ExamPro, and we are taking a look at Azure Notification Hubs. The goal of the service is to send notifications to mobile apps in order to increase app engagement and usage. You can use tags to filter which apps or users will receive notifications, and apps can be registered with Azure Notification Hubs in two different ways. The first is the installation model, where the app sends an installation request to Azure Notification Hubs including all the information needed for the installation to be completed; this model is easy to set up and maintain. Then there's the registration model, where the app sends Azure Notification Hubs a registration request with the PNS handle and tags, and Azure Notification Hubs responds with a registration ID; it's great for native or generic templates that can be used to define the message. Now, in order to actually show off how to use this service, we'd have to set up a mobile app, and that's not easy to do, so this is one we do not have a follow-along for. I don't think that's a big deal, because I really didn't see it on the exam, but it is part of the material, so we should definitely know what it does. We'll just continue on here and learn a little bit more about Azure Notification Hubs. Okay.
All right, let's talk about the platforms that Azure Notification Hubs supports. This gives you an idea of how it can be used with different mobile frameworks, because we don't all use iPhones; there are Androids and other things like that. For Android, support is through Firebase Cloud Messaging. If you don't know what Firebase is, it's Google's platform as a service for building out applications, similar to Amplify or Supabase, and the component for messaging is its Cloud Messaging SDK. This used to be called Google Cloud Messaging, so if you go through older documentation you might see that name; it does not exist anymore, and you have to use Firebase Cloud Messaging. For iOS, we have the Apple Push Notification service (APNs), and through that it's Notification Hubs for iOS 13, which you work with via the Objective-C SDK. I don't think there's support for Swift, at least I could not find it, but this is only important if you are a native mobile application developer; basically, the Objective-C SDK is how you would interact with it. There's also Notification Hubs for Xamarin (I always say Xamarin wrong; I'm assuming the X is pronounced like an S, I don't know why) for iOS and Android applications. Xamarin is a framework built by Microsoft that allows you to build native applications for a variety of targets like iOS, Android, and Windows Phone. For Windows Phone, communication is through the Microsoft Push Notification Service (MPNS). And for Windows, because you can build a Windows application and not just a Windows Phone one, it's through a Universal Windows Platform (UWP) app, and from there that is the Windows Push Notification Service (WNS). This is just to give you an idea of the breadth and get you exposure to a lot of these terms; again, I don't think this is going to show up on the exam, but it's good to know for Azure Notification Hubs. Okay.

All right, let's take a look at how
message flow works for Azure Notification Hubs. Notice the tower in this diagram: that is the platform notification service (PNS) run by your platform provider. If you're using Apple, they have their own; Microsoft has their own; et cetera. The idea is that Notification Hubs is connected to that service, and that's how a notification gets pushed out: Notification Hubs pushes to the tower, and the tower pushes out to the devices that are subscribed. Let's go through the flow, because there are some interesting things about handles we need to know. The idea is that your app says, hey, I want to subscribe to push notifications, so it sends a request to the platform notification service, and it gets back a handle. That handle varies: if you're using the Windows notification service you get back a URI, and for Apple's you get back a token. Then you have to store that handle: it comes back to your phone or tablet, goes to your back end, and the back end stores it. The idea is that whenever your app wants to notify all devices that are registered for push notifications, it pushes the message out with the handle; it goes to the platform notification service, which rolls it out, and all your apps receive the notification. Hopefully that makes it pretty clear how this works. Okay.
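Tying the handle back to the installation model from earlier: the app bundles the handle (called the pushChannel) together with an ID, the platform, and tags into one request. Below is a hedged sketch of such a payload; the field names follow the documented installation shape, but every value here is a hypothetical placeholder, not something from a real app.

```javascript
// Sketch of an installation-model payload. All values are hypothetical.
// The pushChannel is the handle returned by the platform notification
// service (a URI for WNS, a device token for APNs, and so on).
function buildInstallation(installationId, platform, pushChannel, tags) {
  return { installationId, platform, pushChannel, tags };
}

const installation = buildInstallation(
  "device-12345",               // stable ID chosen by the app
  "apns",                       // which platform notification service to route through
  "example-apns-device-token",  // the handle obtained from the PNS
  ["user:alice", "topic:news"]  // tags used later to target this device
);
console.log(installation.platform); // "apns"
```

The tags attached here are what routing (covered next) matches against when deciding which registrations receive a given notification.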
All right, let's take a look at some key concepts or components in Azure Notification Hubs, just to cement what we learned about the message flow and the service in general. We have routing, the process of selecting which registrations will receive the push notification. Within routing, a broadcast means all registrations receive the push notification; a tag means registrations that contain the specific tag receive it, a great way to narrow down who you want to notify; and tag expressions mean registrations that match the tag expression receive it, an even finer way of narrowing down who receives a push notification. There's the notification hub namespace: you create a group of one or more notification hubs in order to send things out. Tags are used to label registrations so that they can be routed more efficiently, which is what we were talking about a moment ago with tags and tag expressions. You have templates: these define the format in which the client will receive the push notification, and they're platform-specific. Each platform has its own platform notification service, which is in charge of push notifications, so the idea is that Azure Notification Hubs is basically an adapter to the variety of services offered by these notification providers. Okay.
Hey, this is Andrew Brown from ExamPro, and we are taking a look at queuing. Both Azure Storage Queues and Azure Service Bus have overlapping responsibilities, because they're both messaging (queuing) as a service, so before we jump into these services I wanted to distinguish the differences between them so you can identify the use case for each. Azure Storage Queues is a simple message queuing service that can store large numbers of messages. Azure Service Bus is a broader messaging service that supports queuing, pub/sub, and more advanced integration patterns. On the Storage Queues side: it can handle millions of messages; there's no guarantee of the order of messages; there's at-least-once delivery, which is really nice; a 500 terabyte max queue size; a 64 kilobyte max message size (48 kilobytes if you're doing Base64 encoding, since that's the payload size after encoding); unlimited queues; unlimited concurrent clients; and lease-based access, with a 30-second to 7-day window set for the entire queue. On the right-hand side, looking at Service Bus: it can offer first-in, first-out (FIFO), which is very useful because it guarantees the order in which messages are consumed (that's what first-in, first-out means); at-least-once or at-most-once delivery; 1 gigabyte to 80 gigabytes max queue size; 256 kilobyte to 1 megabyte message size, so a lot larger than Storage Queues; a maximum of 10,000 queues (but that is a lot of queues; I don't know what you're doing with more than ten thousand); 5,000 concurrent clients; lock-based access, with a 60-second lock that can be changed per message; and it has dead-letter support (which is very useful), state management support, message group support, deduplication support, queue purging support, and transaction support. So the thing is, it looks like Service Bus wins in all categories, but remember that Storage Queues is simple by design, and because it's simple it can handle millions of messages; if you don't care about first-in, first-out, it makes sense. You really have to decide based on your use case, and there is probably a pricing difference too, which I think we cover in this course. Hopefully you get the idea that these two services do the same thing, but in different ways. Okay.

Hey, this is Andrew Brown from ExamPro,
and we are taking a look at Azure Queue Storage, which is a simple message queue that allows you to exchange cloud messages. It's used for application integration between services, via an authenticated HTTPS endpoint, and it can accept messages of up to 64 kilobytes in size. Queue Storage is a service under the Azure Storage account, so you first have to create a storage account, and then from there you create your queue; you'll be using the same access keys and connection strings as the rest of the storage account resources. There are three ways of handling messages in Azure Queue Storage. The first is peek, which, as the name says, retrieves (looks at) a message at the front of the queue without removing or locking it. There's get (receive), which retrieves the message at the front of the queue and temporarily locks it (makes it invisible) while it's being processed. And there's delete, which removes a message that has been received and processed. Just to give you a visual of how simple this queue is to use: in the Azure portal UI you can open it up, write a message, set an expiry, and just say OK. But the way you're generally going to use it is not through the UI, because this is for application integration; that means you're going to be writing SDK code or utilizing the CLI. I believe Azure Queue Storage is one of the services where you can actually use the CLI; we cover that as a difference from the other queuing service you can use in Azure. Here's an example with the Azure SDK; it's a Python example. Getting my pen tool out: it is a bit brief, but the idea is that we have our queue client, we have our connection string URL, which we're getting from our storage account, we put in the name of the queue, and then we just send a message. It's that simple, and that is a Python example. Let's talk about some of
the components within a queue. There's not a lot here, so it's not too complicated. You have a queue, which can be accessed using a URL of the following format: https://<storage account>.queue.core.windows.net/<queue name>. Would this show up in the exam? Probably not, but it's generally good to know just in case, because Azure exams are very tricky. We also have this URL here as an example, reflecting the example on the right-hand side: we have "myaccount" as the storage account, and then "images-to-download", which would be one particular queue. Just to hit that URL home: you have a storage account, which is required in order for us to make the queue; and you have the queue itself, which contains a set of messages. Queue names must be all lowercase; that's just one of those interesting edge cases. Messages can be up to 64 kilobytes in size, so they're very small; this is where you might want to use the other queuing service within Azure, which I can't remember the name of right now, but we cover it when we get to that section. There is also some versioning that's a bit different: if you're using a service version before 2017, the maximum time-to-live on a message is seven days; from the 2017 version onward, it can be any positive number, or -1, indicating that messages don't expire. I don't know what the practical maximum is, but what's important to know is that the old behavior gives you seven days, so you'll want to use the latest version to have the most flexibility. The default time-to-live is seven days if the parameter is not specified, which is totally fine. Let's take a look at some of the CLI commands,
because I'm making this course specifically for the AZ-204, which means developers, so you're getting lots of hands-on stuff and you should really know these CLI commands; we do cover them in the follow-along. The idea is that we have az storage message (it's under az storage because we're using a storage account) with actions for the queue. We have clear, which deletes all messages from the specified queue; delete, to delete a specific message; get, to retrieve one or more messages from the front of the queue; peek, to retrieve one or more messages from the front of the queue without removing them, changing their visibility, or locking them; put, to add new messages to the back of the message queue; and update, which updates the visibility timeout of a message. These systems are very, very simple, but they're also highly scalable, which is really great. But yeah, there you go.
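The verbs above map onto simple queue semantics, which can be illustrated with an in-memory queue (no Azure account needed). This is only a sketch of the behavior the CLI exposes, not the service itself; the key distinction it shows is that peek leaves a message visible, while get hides it for a visibility timeout without deleting it.

```javascript
// In-memory illustration of the az storage message verbs:
// put adds to the back, peek reads the front without hiding it,
// get reads the front and hides it until its visibility timeout passes,
// and delete removes a message for good.
class SimulatedQueue {
  constructor() { this.messages = []; this.nextId = 1; }
  put(text) { this.messages.push({ id: this.nextId++, text, invisibleUntil: 0 }); }
  visible(now) { return this.messages.filter(m => m.invisibleUntil <= now); }
  peek(now = Date.now()) { return this.visible(now)[0]?.text; }
  get(visibilityTimeoutMs, now = Date.now()) {
    const m = this.visible(now)[0];
    if (m) m.invisibleUntil = now + visibilityTimeoutMs; // hidden, not deleted
    return m;
  }
  delete(id) { this.messages = this.messages.filter(m => m.id !== id); }
}

const q = new SimulatedQueue();
q.put("hello");
q.put("world");
console.log(q.peek());    // "hello": still visible afterwards
const msg = q.get(30000); // "hello" is now hidden for 30 seconds
console.log(q.peek());    // "world": the front message is locked
q.delete(msg.id);         // processing succeeded, remove it permanently
console.log(q.peek());    // "world"
```

The get-then-delete pattern is why delivery is at-least-once: if your worker crashes before calling delete, the message reappears after the visibility timeout and another worker picks it up.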
[Music] hey this is andrew brown from exam pro and in this follow along we're going to
learn all about storage queues so what i want you to do is log into the azure portal and at the top here type in q and
so that's going to show you it's going to bring you to storage accounts because it is not a separate service it's uh
underneath storage accounts so what we'll do here is we'll create ourselves a new storage account
We'll hit New or Create there, give it a moment to load, and we're going to create ourselves a new namespace. I'm going to call this one az204storageaccountqueue — it doesn't like hyphens, so maybe like that. There we go. And we'll let it launch in
whatever region it wants to; here it's going to be East US. For performance we'll stick with Standard, and we'll stick with a redundancy of geo-redundant storage; those options don't matter for using the queue. We'll go to Advanced to see if there's anything interesting there — nothing in particular, I was just checking if there's anything for the queue. So we'll go ahead and hit Review and Create, scroll on down, hit Create, and give that a moment to create.
In order to utilize the queue, there isn't much that we can do in the portal UI, so we are going to need to utilize the Azure CLI. There are a few ways we could go about this — we could launch Visual Studio Code — but since it's all CLI-driven, what I'm going to do is open up Cloud Shell, and that's the way we're going to interact with it. If this is your first time spinning up Cloud Shell, it might ask you to create a storage account, so just let it do that. Now that this resource is complete, we'll click into it, and on the left-hand side we're going to make our way over to Queues. I'm going to create a new queue, I'm going to call it myqueue, and we'll hit OK.
For this one there wasn't really a follow-along in the Microsoft documents, so I just kind of made my own, and for that information I have a repository — you'll find this throughout the course — called the Developer Associate. Some of these are reworkings of Microsoft ones, or they're my own creations, and this one in particular, Azure Storage Queue, is a bit more unique. The thing we're going to need is a connection string, in order for us to connect via the CLI, because all the commands we're using — put, peek, get, all these things — are going to rely on it. In terms of what we want to run, I'm going to look up "azure queue message cli", because what you'll mostly want to know is how to interact with messages. There are a bunch of different CLI commands for Azure Storage, like working with the queue itself and stuff like that, but I'd rather just work with messages, because that's the only part we can't do here, and that's where we need to understand the difference between peek, get, put, and things like that.
The first thing we'll do is get a connection string, and I believe that is at the queue level, so we'll go into the queue here and click into it — because sometimes the access key is at the storage account level, and sometimes, like if you go here, there are keys at the queue level too. So I think it's the queue key that we want, and if it isn't, we'll find out very shortly that we made a mistake. We'll go back to our resources, click into the queue, and look at access policies — we can add a policy here... nope. You know what, it absolutely is the storage account that we need. So we go down here, type in "access", and we have Access Keys — here it is. Between different Azure services, sometimes this is called access keys and sometimes it's called something else; we're going to discover that as we do other follow-alongs. Just understand this is not always consistent in the UI. What we need is a connection string, so we'll click Show up here.
What we'll do is grab this string, and then I want to type in export QUEUE_CONNECTION_STRING= and then double quotations — very important that you put those double quotations, because if you don't, the assignment will terminate early and only part of the string will get set. I'm going to hit enter, and that sets us an environment variable for this. If I do env | grep and put in that name, that's how we can check if it was set correctly. If you've never done this: typing env shows all of your environment variables, and all grep is doing is finding the one with that name — I could even do it partially, like this, and it would still return it. So that's one thing; the next thing
is we need the queue name. We named it myqueue, I believe, so if we go back to our storage accounts and click back in — such a pain sometimes, clicking around here — we'll go to Queues, and yes, we named it myqueue. So I'm going to set export QUEUE_NAME= — we don't need double quotations here, but I'm going to use them anyway — myqueue, and we'll make sure that is set correctly by grepping for it, and there it is. Some of the first commands we'll want to use are things like put, because put
will allow us to enter stuff into the queue. So we'll go to the put command here — you can see there's a lot of options. When you want to know what you need to provide, all I do is copy the base command and paste it in, and it'll just tell us, so I'll give it a moment here. It complains and says we need the queue name and the content, and I already know that we also need the connection string, even though it's not saying it there — down here it says connection string, account name, things like that. To make this easier I'm going to open the editor here — and this is a terrible, terrible editor, but we're just working with some text, so it makes our lives a little bit easier. I want to type the command up here before we paste it. Of course this is all in the repository, but I'm doing it by hand, and I recommend that you do it by hand like I'm doing, so that you have a better chance to commit it to memory. So we type in az storage message put, and I'm going to do a backslash so we can do multi-line here.
We do --connection-string= and set that to the environment variable we called QUEUE_CONNECTION_STRING — when we use the dollar sign, that allows us to access the environment variable that we set earlier. We did that because we have to run a series of commands, and it was just a lot easier to pass this in consistently as opposed to pasting in that raw string every time. Then we need to provide the queue name, so that would be $QUEUE_NAME, and then we need to provide content, so we'll say --content= and I'm going to say "Hello World". So we'll copy the contents there — I'm going to type clear so I can see what I'm doing — paste that on in, hit enter, and yeah, it is now in the queue.
Now, one thing that I was not able to figure out was how you observe how many messages are in the queue from the CLI — I'm not exactly sure. If we go in here... oh well, we have the message right here, so we can see there is one in the queue. What I was trying to say is that when I was using the CLI, I was trying to find a command for that and never did, but it's nice that we can see it here in the actual console. I want to have more than one message, because I want to show you how visibility works. So what we'll do is paste — Command-V, because if you notice, on right click there's no paste option, which is kind of weird — "Hello Mars" as our second one. We'll copy this, paste it below, hit enter, and now we have two queued up. If we refresh, we can see we have two messages in our queue. We can even add a message here in the portal if we wanted to, so we could say "Hello Jupiter" and hit —
maybe I have to scroll down here — oh no, it vanished, I've got to do it again — bring it all the way down here, Add Message, there we go: "Hello Jupiter", and we'll say OK. Notice that we have some options there, where we can set it to expire in x amount of time, or never expire, and things like that. But anyway, now we should probably attempt to peek
or get. We'll go back to the top here, because I think they list them out: we have get, which retrieves one or more messages from the front of the queue, and then you have peek, which retrieves one or more messages from the front of the queue but does not alter the visibility of the message. When we say visibility, the idea is that when somebody accesses a message, they're now looking at it, and nobody else can read it — and the reason that happens is to avoid two people doing the same job by accident. But if we use peek, the message doesn't become invisible; it just remains the same. So let's do peek first. We'll copy this, paste it in — oh, we have to Ctrl-V there — and get rid of the content line, because we don't need to pass content; we still need the queue name and the connection string. And I'm going to change this to peek. What peek should do is return us what's on the front of the queue, so that's the top up here. We'll copy it, right click, paste down below, hit enter, and we get "Hello World". If we were to hit up and run it again, we would still get "Hello World", because it has not been marked as invisible — it's still visible. So now let's go ahead and change this to
get — oh, we've got to paste, Ctrl-V. So we do get here, and copy — Ctrl-C or Command-C depending on what you're using — paste that in there, hit enter, and now it says "Hello World" as it did before. Both times we ran it, it was always returning the same one, okay? But notice it says the dequeue count is one, so we'll go here, refresh, and notice that "Hello World" is not appearing now — that's because it's not visible. It's still there; it's just not visible, right? And so the next thing: if we hit enter again, it now returns "Hello Mars", okay? And the last thing we can do here is clear the queue if we want, which is probably something we should do — so if we go here, copy this, type in clear, and hit enter, that should clear the entire queue. So hopefully that gives you an idea of how the queueing system works.
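The part of the demo that trips people up is that a message fetched with get doesn't disappear — it is hidden for a visibility window, and if nobody deletes it in time, it reappears for another worker to retry. Here's a small local sketch of that behavior; the timings and function names are hypothetical, and this is an illustration of the semantics, not the real service.

```javascript
// Sketch of visibility-timeout behavior: a message retrieved with
// `get` is hidden for a window of time; if the receiver never
// deletes it, it becomes visible again so another worker can retry.
// Local illustration only; the 30s timeout is a made-up example.
function makeQueue(visibilityMs) {
  const msgs = [];
  return {
    put: (content) => msgs.push({ content, invisibleUntil: 0 }),
    get: (now) => {
      const m = msgs.find(m => m.invisibleUntil <= now);
      if (m) m.invisibleUntil = now + visibilityMs; // hide for the timeout window
      return m ? m.content : null;
    },
  };
}

const q2 = makeQueue(30000);  // 30-second visibility timeout
q2.put('job-1');
const first = q2.get(0);      // 'job-1' -- hidden until t=30000
const during = q2.get(10000); // null   -- still inside the window
const after = q2.get(30000);  // 'job-1' -- reappeared, will be retried
console.log(first, during, after);
```

This is why, in the demo above, "Hello World" stopped appearing in the portal right after the get: it was invisible, not deleted.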
Of course you can use the SDK, so if we were to type in "azure storage queue documentation" and go to the tutorials, there are probably samples or examples. I always go to JavaScript because it's the easiest one to read. If we go down to messages here — send a message — it's not really great that they don't have this inline, but the way it works is you import the library at the top, so you'd install it with npm, you have the client and the credentials, and then down below you're creating your message. I guess you have to create the queue client first up here, and then you pass along your message to send, and it tells you whether it's been sent or not. You can see they're using peek — very similar — and receive message, all very similar. So hopefully that gives you kind of an idea. Notice that it's called "receive message" as opposed to "get" — in the CLI we did get, but this is really a receive, so just let that translate over. These queues are extremely simple, but they're designed to be extremely simple. When we look at Service Bus you'll see that it's a lot more complex, but one of the key differences, at least pragmatically, between Service Bus and Storage Queues is that Storage Queues allow you to use the CLI to insert messages, whereas when we do Service Bus, we have to write code using the SDK — and we will do that. But yeah, that's all there is really to this one. What
we'll do now is make our way over to our resource groups. I'm going to close off our shell and look for the one we just created — this is the az204 storage account queue one — and we'll go ahead and delete that. Delete, and it should all be good. Of course, always double-check five minutes from now and make sure that stuff is deleted, because Azure is notorious for sometimes not deleting things — or you think things are being deleted, but they don't show up as deleted immediately; it takes some time to propagate through their servers. But there you go.
Hey, this is Andrew Brown from ExamPro. We're looking at Azure Service Bus. Azure Service Bus is a fully managed enterprise message broker that allows you to publish and subscribe to topics and queue messages; it can scale with your applications with asynchronous messaging and built-in integration with Azure services. Here is an example of a service bus: the way it works is you have producers and you have consumers. Producers send messages or events into the bus, and the bus is designed to broker that information — pass it along to the ones that are subscribed. So the consuming applications are subscribed to specific messages, and they pull data to get it into the target end applications. Some things we need to know: you can handle single or batch messaging, load-balance messages, use topic subscriptions, managed sessions, and transactions, with guarantees that it complies with industry standards and protocols like the Advanced Message Queuing Protocol (AMQP), the Java Message Service (JMS), and other protocols. They're just saying that if you're already using particular protocols, you can
integrate with those. Let's talk about key concepts for Service Bus. The idea is that you have a namespace, which works like a server containing queues and topics; queues, which contain the messages; senders, who send the messages; and receivers, who receive the messages. A topic is a queue with multiple receivers, each of which works like a queue, and a subscription is a receiver on a topic. A batch is a group of messages, and a safe batch validates whether each message can be included in the batch. Sessions allow you to use first-in-first-out (FIFO) ordering and group your messages in a queue. Peek returns a message from the queue without removing it. A dead-letter queue is a queue for messages that were unable to be delivered through the normal queue. Peek-lock retrieves a message from the queue without removing it and locks it, so other receivers cannot receive it; receive-and-delete receives and deletes messages from the queue. Auto-delete-on-idle sets a time span after which the queue is deleted if not used. Duplicate detection (with a detection history window) checks whether a message was already sent before sending it again. So there's a lot of stuff here, but don't worry — we're going to cover the most important stuff in the upcoming slides and in the follow-along, okay?
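Of those concepts, the two receive modes are worth pinning down early. Here's a local sketch contrasting peek-lock (the message stays in the queue, locked, until the receiver completes it) with receive-and-delete (a destructive read in one step). This is an illustration with invented names, not the Service Bus SDK.

```javascript
// Local sketch contrasting the two Service Bus receive modes named
// above: peek-lock (retrieve + lock; the message survives until it
// is completed) versus receive-and-delete (removed as soon as read).
// Illustration only, not the real SDK.
const sbQueue = ['order-1', 'order-2'];
const locks = new Set();

function peekLock() {                   // message stays in the queue, but is locked
  const m = sbQueue.find(m => !locks.has(m));
  if (m) locks.add(m);
  return m ?? null;
}
function complete(m) {                  // receiver settles: only now is it removed
  locks.delete(m);
  sbQueue.splice(sbQueue.indexOf(m), 1);
}
function receiveAndDelete() {           // destructive read in a single step
  return sbQueue.shift() ?? null;
}

const locked = peekLock();              // 'order-1' locked, still in the queue
const stillThere = sbQueue.includes('order-1'); // true -- lock, not removal
complete(locked);                       // 'order-1' gone only after completion
const destructive = receiveAndDelete(); // 'order-2' removed immediately
console.log(locked, stillThere, destructive, sbQueue.length); // order-1 true order-2 0
```

Peek-lock is the safer default: if the receiver crashes before completing, the lock expires and the message can be delivered again.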
Let us talk about namespaces for Azure Service Bus. Simply put, a namespace is a container for all messaging components, so queues and topics. But there are a few things here — I don't have a visual, but there's some text we have to go through to understand what namespaces do. A single namespace can contain multiple queues and topics, and namespaces are often used as application containers. A Service Bus namespace is a capacity slice of a large cluster made up of dozens of all-active virtual machines, and it may optionally span three AZs, or availability zones, in Azure — you get all of the benefits of running the message broker at massive scale in terms of availability and robustness. Service Bus is serverless messaging, so you don't need to worry about the underlying complexity — which is kind of interesting, because they mention spanning across availability zones and virtual machines, but it's a component whose management you don't have to worry about. You do need to know a little bit about how it works underneath, but there you go.
Hey, this is Andrew Brown from ExamPro, and we are taking a look at queues for Azure Service Bus. There are two types of messaging systems in Azure Service Bus — why they didn't make these two separate services, I have no idea — but one is for messaging and one is for pub/sub, and the one for messaging is called queues. Queues are used to send and receive messages; messages are stored in queues until the receiving application is ready to accept and process them. Messages in queues are ordered and timestamped on arrival. Once accepted by the broker, a message is always held durably in triple-redundant storage, across multiple AZs if its namespace is zone-enabled — so if you don't enable zone redundancy at the namespace level, you're not going to get that kind of durability. Service Bus never leaves messages in memory or volatile storage after they've been reported to the client as accepted. Messages are delivered in pull mode only, delivering messages when requested — and that's the key difference between a queue and pub/sub: with a queue, the receiving application has to pull, saying "hey, are there any messages in the queue?", and that's how it gets them; in a pub/sub model, messages just get sent to subscribers, like a newsletter subscription delivered to your door. Let's take a look at the queue configuration itself. You can see we can set things like the max delivery count and the queue size. An important thing to note is the TTL — how long a message will stay in the queue before it's removed, or moved to the dead-letter queue. You can enable dead-lettering down below, you can enable partitioning, and you can set the lock duration — very similar to what we'll see for pub/sub at the subscription level. But there you go.
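The pull-mode point above is worth seeing as code: nothing is pushed to a queue consumer; the receiver has to keep asking the broker for work. A minimal local sketch of that polling pattern (names and data are made up):

```javascript
// The queue section above stresses that Service Bus queues deliver
// in pull mode: nothing is pushed; the receiver must ask. A minimal
// local sketch of that polling pattern -- not the real broker.
const pending = ['msg-a', 'msg-b', 'msg-c'];
const processed = [];

function poll() {                  // receiver asks: "anything for me?"
  return pending.shift() ?? null;  // broker hands back at most one message
}

let msg;
while ((msg = poll()) !== null) {  // keep pulling until the queue is empty
  processed.push(msg);             // ...process the message here...
}
console.log(processed.join(','), pending.length); // msg-a,msg-b,msg-c 0
```

Contrast this with topics in the next section, where every subscription gets messages pushed to it without asking.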
Hey, this is Andrew Brown, and we are taking a look at topics. Topics can also be used to send and receive messages, and the idea here is that a queue is often used to facilitate point-to-point communication, while topics are useful in publish/subscribe — one-to-many communication. If you want one-to-one communication, use queues; if you want pub/sub, use topics — and both of these options are available in Azure Service Bus. Topics are not available at the Basic pricing tier; you need Standard or Premium, so if you're trying to find the option for pub/sub, that's where you're going to have it. Remember that we said queues use a poll model; well, topics use a push model, so you don't have to request anything — it just gets pushed out to you because you're subscribed to it, like having a newsletter delivered to your door. Multiple independent subscriptions can be attached to a topic, and they work in the same way as queues from the receiver's side. A subscriber to a topic can receive a copy of each message sent to that topic. Subscriptions are named entities. You can define rules on a subscription: a subscription rule has a filter that specifies a condition for a message to be copied into the subscription, as well as an optional action that modifies the message metadata. Let's take a look at what the form is for a topic. The idea is you name your topic, you have a max size between one and five gigabytes, and you can have a time-to-live — after that period of time, those messages just go poof, they're gone. You can avoid duplication by turning on duplicate detection, and you can enable partitioning — partitioning is intended for scale, so if you have a lot of messages you might need to partition based on your volume, okay? Let's take a quick look at subscriptions
for Azure Service Bus. I just pulled up the form here, because a subscription is just the application that is receiving that information, so you define one in the interface. But there are some values I want to point out, such as the max delivery count, which can be between one and two thousand. There are also managed sessions — with sessions enabled, the subscription can guarantee first-in-first-out (FIFO) delivery of messages, so that is where that gets turned on. You can also enable dead-lettering here, and you can set and adjust a lock duration — I didn't highlight it, but it's there. We will see these things a little more closely when we do the follow-alongs, but yeah: a subscription is an application that is receiving information — getting information pushed to it — that it will use, okay?
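The topic/subscription fan-out described above — every subscription gets its own copy of each message, optionally narrowed by a filter rule — can be sketched locally like this. The names and filter are invented for illustration; this is not the Service Bus SDK.

```javascript
// Sketch of topic/subscription fan-out: every subscription attached
// to a topic receives its own copy of each published message,
// optionally narrowed by a filter rule. Local illustration only.
function makeTopic() {
  const subs = [];
  return {
    subscribe: (name, filter = () => true) => {
      const sub = { name, filter, inbox: [] };
      subs.push(sub);
      return sub;
    },
    // push model: the topic delivers a copy to each matching subscription
    publish: (msg) => subs.forEach(s => { if (s.filter(msg)) s.inbox.push(msg); }),
  };
}

const topic = makeTopic();
const all = topic.subscribe('audit');                           // no filter: copy of everything
const big = topic.subscribe('big-orders', m => m.amount > 100); // rule-filtered subscription
topic.publish({ id: 1, amount: 50 });
topic.publish({ id: 2, amount: 500 });
console.log(all.inbox.length, big.inbox.length); // 2 1
```

Note the contrast with a queue: here both subscriptions see the second message, whereas a queue would hand each message to exactly one receiver.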
Let us explore the pricing for Azure Service Bus. They have three pricing tiers: Basic, Standard, and Premium, and the more expensive tiers provide more functionality — which I would hope would always be the case for most services. On the right-hand side we have the available pricing, and you can see it's based on a metric, like messaging units per month, or operations per month. And here is the chart, so let's take a look at what the differences are. I highlighted topics in yellow to show you that it is not in the Basic tier; you have to have Standard in order to utilize it. For your purposes as a person learning Azure Service Bus, you absolutely need to give topics and queues a go, because queues are the classic messaging model and topics are pub/sub, and the only way you're going to do that is by using Standard, which we will do in the follow-alongs. Notice that in Basic we don't get topics, transactions, de-duplication, or sessions, and features like resource isolation, geo-disaster recovery, Java messaging (JMS) support, and AZ support aren't there either — some of those aren't supported in Standard, but in Premium we get basically everything. Also notice the big bump in message size between Basic, Standard, and Premium. Again, the thing I really want you to remember is that in Basic there are no topics, okay?

All right, let's take a look here at dead-letter queues. This is a general concept that is common across messaging and pub/sub systems, and it's enabled in Azure Service Bus for both queues and pub/sub. A dead-letter queue is a service implementation to store messages that fail to deliver. Common reasons for failed messages would be things like: the message is sent to a queue that does not exist; the queue length limit is exceeded; the message length limit is exceeded; the message is rejected by another queue exchange; the message reaches a threshold read counter because it's not consumed (sometimes this is called a back-out queue); the message expires due to its per-message TTL, or time-to-live; or the message is not processed successfully. So dead-letter queues hold messages that have failed to deliver, and a dead-letter queue could be used for things like monitoring failed attempts, requeuing failed attempts so they're tried again, or triggering a follow-up action — maybe a remediation or a response. A very useful feature, but there you go.
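One of those failure reasons — the message reaching a threshold delivery count without being consumed — can be sketched locally. Here each failed receive bumps a delivery counter, and once it crosses the queue's max delivery count, the message moves to the dead-letter queue instead of being retried forever. The names and threshold are invented for illustration.

```javascript
// Sketch of dead-lettering on max delivery count: each failed receive
// bumps the delivery counter; once it crosses the threshold, the
// message is moved to the dead-letter queue instead of being retried.
// Local illustration; the threshold mirrors a queue's max delivery count.
const MAX_DELIVERY_COUNT = 3;
const mainQueue = [{ body: 'poison-message', deliveryCount: 0 }];
const deadLetterQueue = [];

function receiveAndFail() {       // simulate a handler that always fails
  const m = mainQueue[0];
  if (!m) return;
  m.deliveryCount += 1;           // one more failed delivery attempt
  if (m.deliveryCount >= MAX_DELIVERY_COUNT) {
    deadLetterQueue.push(mainQueue.shift()); // give up: dead-letter it
  }
}

receiveAndFail();
receiveAndFail();
receiveAndFail();
console.log(mainQueue.length, deadLetterQueue.length, deadLetterQueue[0].deliveryCount);
// 0 1 3
```

The payoff is that a "poison" message stops blocking the queue, but is still held somewhere you can monitor, inspect, and requeue from.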
Let us talk about Azure Service Bus CLI commands, because if you are doing the AZ-204 — the developer exam — it's very important to know these commands, since they may ask you about them. Here I have a bunch of example commands. Quickly looking at this, we have creating a resource group, creating a Service Bus namespace, a queue, and an authorization rule, and it's pretty straightforward: az servicebus, et cetera. Let's take a look at some commands — or subcommands, I should say: when we say command, the command is servicebus, and the subcommand is what follows it. We have ones for the geo-recovery alias, so you can set up geo-recovery (we don't have an example on the right-hand side for that), and for migrations, for namespaces, for queues, for topics. On the right-hand side we have one for a namespace and one for creating a queue. But you'll notice that we don't have anything that manipulates or adds messages to queues: the Azure Service Bus CLI does not have commands to send messages to a queue or topic, unlike Azure Storage Queues — which I found kind of interesting. To send messages you need to use an SDK, like Node.js, so you'd install it and then write some code, and we do do that in the follow-along. But I just want you to remember that with Azure Service Bus you cannot send messages to queues or topics via the CLI. That kind of sucks, because if you want to do an easy test, it's unfortunate the functionality is not there. But anyway, that's the difference between it and Azure Storage Queues, so there you go.

Hey, this is Andrew Brown from ExamPro, and we are going to take a look at
Service Bus in a follow-along. What I want you to do is type in "service bus" at the top here. What's interesting is that this shows the old icon — they have a new icon — so just realize there are some inconsistencies there, and that's not my fault, that's Azure's fault. The first thing we need to do is create a namespace, because a service bus is kind of like a storage account, where you can have a variety of different kinds of storage: you can have more than one type of messaging system. We have the traditional one — queue messaging, similar to storage queues but with first-in-first-out functionality — and we have pub/sub via topics. So what you'll do is create a new Service Bus namespace. I'm going to create a new resource group and call it az204-service-bus-queue — queue, because we're going to do a queue, and then we're going to do a topic separately. And for the namespace, I'm going to call it service-bus-queue to keep it simple.
We'll let it launch wherever it wants to, and notice there are multiple pricing tiers — the tier affects the functionality, so if we do Basic we're only going to have access to queues, not topics. And this is totally safe and fine to do; even if we did Premium it's fine, because it's based on your consumption, not just on having it sitting around. So we'll pick Basic here, go to Networking — I don't think there's anything interesting there — then Review and Create, and let that create. Click Create again, and it's deploying. As that is deploying — which will not take too long — what I've done is set up a private repository here; you'll probably see me use this
throughout the course. It's literally an empty repository, because I already have the code done — I've been doing the follow-alongs and documenting them in the free Azure Developer Associate notes. When you're doing follow-alongs with me, you should do them from scratch, and then if you need to, you can reference the stuff here. So I have this separate repository, and I have a Gitpod account, which has a free tier — you can totally do this in your own Visual Studio Code on your local machine. The reason I'm using Gitpod is that I always want to show you how to set up the CLI and the other tools, and when you launch Gitpod it gives you a blank environment. So I'm just going to launch that up. While that's going, we'll go back here and see if the deployment is ready — just hit refresh. It is still going, but we already have our environment, and while that's going in the background I
want to go install the Azure CLI. We don't even have a single file here, so I'm just going to create a readme.md so I can see what's going on — maybe we'll just dump things in there as we go. I'm going to search for the Azure CLI for Linux, because this is running Linux — Ubuntu. Something you should always check is which Linux version you're running. If you're on Windows, of course, this is going to be different, but even Windows, using the Windows Subsystem for Linux, is running Ubuntu as well. I'll go to the first link — nope, that's not the one I want — maybe the second one. There's usually a command you can run to see which Linux version you're on, like cat /proc/version — it really does vary based on what you're using. So I'll go to Terminal, New Terminal, paste that in, and hit enter; it says something like Linux 5.13 Ubuntu, which doesn't read very well, so let's try another one. There we go: we're running Ubuntu 20.04. I already knew that, but I just wanted to double-check, and
the reason that matters is that when you're installing the CLI, it might matter what version you're using. So we'll go to Linux here, and the instructions might vary — this one says 16, 18, 20, and they're all the same — and we have this one-liner that installs it, so I'll drop it in and hit enter. I'm not sure if this font is too small, so while that's going I'm going to see if I can bump up the font — I'm looking for the terminal font size setting — let's just say 20. There we go. And now the Azure CLI should be installed, so I'll type clear, and then az to run it.
Looks good to me. Next we'll log in — I don't want to log in the normal way, I want to log in with a device code, because on your regular computer you can just click a button and go through the browser, but I'm not going to be able to do that here. I'll do it the wrong way first, to show the right way: if I just run az login, it's going to go to localhost, because it's trying to launch a browser on my local machine, and that's no good. And here it says: do az login --use-device-code. That's the one I really wanted, so I type that in and hit enter, and it gives us a code. We follow the link it prints, grab that code, continue, and go back — it will authenticate, it'll just take a second — then close the tab. And so now I'm authenticated, so I should be able to do whatever I want. What I need to do next is
create ourselves a message queue. We'll go to the resource, and notice here under Entities it only says Queues — if we had the Standard plan rather than Basic, we would see Topics here as well. We'll click into Queues and create a new queue; I'm going to call it myqueue. We have some options here: the queue size can go up to five gigabytes; the max delivery count, which is the maximum number of deliveries; the time-to-live, which is how long messages live in the queue before they are dropped — or dropped into a dead-letter queue; and the lock duration, which sets the amount of time a message is locked for other receivers. You can also enable partitioning — that's pretty complicated — but we'll go ahead and create our queue, and this should be pretty darn quick.
There is our queue. We'll click into it, and you'll notice that there isn't really a way to view messages, and not an obvious way to add them. We do have the Service Bus Explorer here, which I guess technically lets you send and receive — I had not noticed this before; at least it was not working for me. So I suppose we could send a message here saying "Hello World" — this literally wasn't here last time I checked — and go ahead and hit Send. Notice it says there's one active message, and we can receive it, say yes, and it says it received the message. It's not showing us the contents, so I guess there kind of is something here — I guess they're still working on it.
But mostly what we're going to have to do is do things programmatically, and that is why we have this account. So what I want you to do is open a new tab and type in "azure service bus documentation", because we're going to grab some code there, modify it, and make it our own, so it's a bit easier to work with. So here I'm in the Service Bus docs; we'll go to Tutorials — I'm not sure this is the right one — then "azure service bus documentation queue" — it's the same thing here, but this doesn't look right. I mean, it is the right page, but it had a couple of tutorials that I remembered, so we'll try "azure service bus tutorial topics" — sometimes things aren't where you think they're supposed to be. Okay, same page again... oh, it was Quickstarts, sorry. We have Tutorials, and then we have Quickstarts, and under the Quickstarts is where I was finding the example code
does the creation of it doesn't necessarily do sending and receiving messages notice so that we only can use
code so we'll use javascript because i think that will be the easiest to use uh so i already have node node comes
pre-installed on git pod you'll have to figure that out for yourself on your own machine or you can just use git pod as
well because it does have a generous free tier what i'll do is go ahead and paste on in
this command it doesn't seem to want to paste today so i'll hit copy and then we'll go back here and go right
click enter paste hit enter and so what that will do is install that library
If you're not very familiar with Node.js, package.json is the package manifest that npm manages, and it now shows that this dependency is there. I want to install one other thing, called dotenv, which will make our lives a lot easier. It exists in flavors for different runtimes, but I just want it for JavaScript. So we'll run npm install dotenv --save; that gives us a safe way to pass along our environment variables. Now both of these are installed. Let's go back to the example code and scroll down: there's one sample called send and one called receive. I'll create a couple of files, send.js and receive.js, and copy the send code into send.js; below it is the receive code.
We'll paste that into receive.js. I'll make the editor bigger and we'll take a quick look at what the send code is doing. It imports the Service Bus SDK, and we need to set a connection string and a queue name. There's a list of messages we're going to pass along. It establishes a ServiceBusClient (very common in all SDKs: you set up a client first), then creates a sender, and then it builds batches of messages, an efficient way to send messages in bulk. There's a for loop: it tries to add each message to the batch, and if the message doesn't fit, it sends the current batch and starts a new one. Pretty straightforward. Receive is going to be similar: connection string, queue name, create the client, create a receiver, then set up a message handler and an error handler and subscribe, listening for messages. Even though we're working with queues, the API still calls this a subscription, so don't get too mixed up by that. Next, I want to make sure we pass our configuration in safely.
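That batch-and-flush loop can be sketched with a tiny mock batch standing in for the SDK's ServiceBusMessageBatch. This is illustrative only (the names MockBatch and sendInBatches are mine, and the real batch rejects messages by byte size, not count), but the control flow is the same as the quickstart's:

```javascript
// Mock of the SDK's message batch: tryAddMessage returns false when full.
class MockBatch {
  constructor(maxCount) { this.maxCount = maxCount; this.messages = []; }
  tryAddMessage(msg) {
    if (this.messages.length >= this.maxCount) return false;
    this.messages.push(msg);
    return true;
  }
}

// Same control flow as the quickstart's send loop: fill a batch, and when a
// message no longer fits, flush the full batch and start a fresh one.
function sendInBatches(messages, maxCount, send) {
  let batch = new MockBatch(maxCount);
  for (const msg of messages) {
    if (!batch.tryAddMessage(msg)) {
      send(batch);                          // flush the full batch
      batch = new MockBatch(maxCount);      // start a new one
      if (!batch.tryAddMessage(msg)) throw new Error("message too large");
    }
  }
  send(batch);                              // send whatever is left
}

// Five messages with room for two per batch means three sends.
const sent = [];
sendInBatches(["a", "b", "c", "d", "e"], 2, b => sent.push(b.messages));
console.log(sent); // [["a","b"],["c","d"],["e"]]
```

The point of batching is fewer round trips to the service while still never dropping a message that doesn't fit the current batch.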
It's pretty standard good practice when working with any language: you don't want to hard-code your values. So I'm going to replace the hard-coded connection string with process.env.CONNECTION_STRING, and the queue name with process.env.QUEUE_NAME. That's how you grab environment variables in JavaScript; every language does it a little differently. I believe the receive file uses the same two values, so I'll just grab those and paste them in there as well. I also want to load the environment variables from a file, so I'll make a new file called .env. This is all part of the dotenv package; pulling its docs up again, the line we need is require('dotenv').config(), which loads the environment variables from that .env file, so we'll go to the top of send.js and paste that in.
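Under the hood, config() does little more than parse KEY=value lines into process.env. A minimal sketch of that idea (the real library also handles quoting, comments, and multiline values):

```javascript
// Minimal sketch of what require("dotenv").config() does: parse KEY=value
// lines and copy them into process.env, without overwriting existing values.
function loadEnv(contents) {
  for (const line of contents.split("\n")) {
    const m = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.*)$/);
    if (m && process.env[m[1]] === undefined) process.env[m[1]] = m[2];
  }
}

// The value may itself contain "=" signs, as connection strings do.
loadEnv("QUEUE_NAME=myqueue\nCONNECTION_STRING=Endpoint=sb://example.servicebus.windows.net/");
console.log(process.env.QUEUE_NAME); // "myqueue"
```

This is also why we don't need quotation marks around values in the .env file: everything after the first equals sign is taken as the value.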
Then we'll go to receive.js and do the same. In the .env file we need to define these variables, so I'll copy the names rather than type them by hand: we just need QUEUE_NAME and CONNECTION_STRING, each followed by an equals sign and its value. Our queue is called myqueue, and next we need to go grab the connection string.
Thinking about it, the connection string lives at the namespace level. We'll go to Shared Access Policies; notice it's called Shared Access Policies, whereas when we were doing storage accounts it was called Access Keys, a totally different interface. This is what I'm talking about when I say Azure is inconsistent. We'll click on RootManageSharedAccessKey. You could probably create your own policy without full privileges, but for this purpose we'll just use this one. On the left-hand side we have a primary and a secondary connection string; we'll use the primary. Go back and paste that value into the .env file. Notice we don't need double quotation marks here, since the .env format doesn't require escaping; when we did things through the CLI for storage accounts, that wasn't something we could get away with. So we have these two values, and they should get loaded when the code runs. This should all be good.
Now we'll type node send.js, and hopefully it just works, fingers crossed. And it sent a batch of messages to the queue. We'll go back over to our queue and take a look: there are 10 active messages right now. So next we'll receive all those messages. Run node receive.js, and this code receives those messages from the Service Bus queue. We'll just wait here, because for whatever reason it takes a little time to finish. Still waiting... there we go. That's all there really is to it; that's queues, and we'll do this again next time with topics. So make your way back to Resource Groups, find the one we just created, az204-servicebus-queue, delete the resource group, confirm the delete, and there you go. As always, double-check afterwards to make sure that stuff is for sure deleted. That's it for Service Bus queues; we'll do topics next.
Hey, this is Andrew Brown from ExamPro,
and we're looking at Service Bus again, this time at topics. Just like before, go to the top, type in "service bus", and go to the Service Bus service. You can still see the old namespace; it should be deleting, that's how slow this thing is. We'll create a new one, with a new resource group called az204-servicebus-topic, and name the namespace servicebustopic. If the name already exists, just dump a bunch of numbers on the end, because the name has to be globally unique, like a domain name: if somebody else has it, you're going to have a problem. This time I'm choosing the Standard tier, because we need it to use those additional features like topics. We'll hit review and create. That'll take a little time, so while it's going I'm going to launch my environment, the Gitpod setup I was using a moment ago. I'm going back to my repo off screen because I don't want to expose all my stuff, and again, if you want, you can do this in your local Visual Studio Code; I just want to show everything from scratch every time. So here's my empty repo with Gitpod. I'll close the old workspace, and all that code is now gone; Gitpod will launch a new environment. It's trying to tell me to open the last one; nope, I'm going to make a new workspace. The namespace looked created, so I went to hit create, but I guess it's still deploying; I thought I'd already deployed it, but I hadn't. While that's going, we'll install the Azure CLI. Type in "azure cli linux", since that's what we're using today, go to the Linux page, scroll down, and grab the one-liner to install it. I'll open my terminal (yours might be somewhere else), allow the paste, and that installs the Azure CLI. As that's installing: the namespace is still creating, I think, and the install shouldn't take too long.
While this is going, we can start grabbing the code. For this one, type in "azure service bus documentation"; I found what we need under the quickstarts, so go to the quickstart for topics and subscriptions, because there's some code there that I want, under the JavaScript tab. There's one for sending to a topic, so we'll grab that name and make a new file, send-to-topic.js, and further down there's another one, receive-from-subscription. You're going to notice this is very similar to working with a queue; the difference is that you can have multiple subscriptions consume the same messages.
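That queue-versus-topic difference can be shown with a toy model, nothing to do with the SDK, just the semantics: a queue hands each message to one receiver, while a topic copies each message into every subscription, and each subscription is then consumed independently like its own queue:

```javascript
// Toy model of a Service Bus topic: send() fans each message out to every
// subscription, and each subscription is drained independently.
class Topic {
  constructor() { this.subscriptions = new Map(); }
  createSubscription(name) { this.subscriptions.set(name, []); }
  send(message) {
    // Every subscription gets its own copy of the message.
    for (const queue of this.subscriptions.values()) queue.push(message);
  }
  receive(name) { return this.subscriptions.get(name).shift(); }
}

const topic = new Topic();
topic.createSubscription("billing");
topic.createSubscription("audit");
topic.send("order created");

console.log(topic.receive("billing")); // "order created"
console.log(topic.receive("audit"));   // "order created", each sub got a copy
```

With a plain queue, only one of those two receivers would have gotten the message; that fan-out is the whole point of topics.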
Our CLI install is done, so we'll type az login... I can't remember the exact flag. I'll hit enter; I know this is the wrong way, but the option I want is device code login, that's what it is. So I'll hit Ctrl+C to exit out of that, then run it the way we actually want, with the device code flow. We provide the code as suggested, hit continue, close the tab, and it says we're authenticated. So we can now use az, the Azure CLI. I don't know that we strictly need the CLI in this one, but we have it anyway, and at least authentication is out of the way. Next, go to the resource; we want to create a topic this time, and I'm going to call it mytopic. You have a topic size between one and five gigabytes, a TTL, and an option to detect duplicates. We'll go ahead and create the topic. Just like last time, we need to install a couple of packages. At the top of the quickstart there should be an npm install; there it is, npm install for the Service Bus package, so we'll paste that in. We'll also need dotenv, so look up dotenv again and run npm install dotenv --save
for our environment variables, and hit enter. We'll create ourselves a new .env file, and we need to look at which environment variables we're going to need. It looks like I didn't actually copy the send-to-topic code yet, so go back, grab it, and paste it into send-to-topic.js. Going to the top of that file: we have a connection string, so that becomes process.env.CONNECTION_STRING (whoops, didn't mean to delete all that; and we have to spell it right or it's going to have a problem), and then process.env.TOPIC_NAME. This one also uses a subscription, so we'll need a third variable; I'll copy these over so I'm not typing them a hundred times, and the third one is process.env.SUBSCRIPTION_NAME. You're noticing I'm not using semicolons; they're optional in JavaScript, so it won't break anything to leave them out. Those are the three we'll need. I'm going to split the editor to make my life a little easier and copy the names into the .env file (not exactly how I wanted to paste it, but that's totally fine). We haven't made a subscription yet, but I know we're going to call it mysubscription to make our lives easy. So TOPIC_NAME is mytopic, SUBSCRIPTION_NAME is mysubscription, and we'll grab the connection string in a moment. Before going back to Azure (we have too many tabs open), we might as well grab the require('dotenv').config() line that loads our configuration and paste it at the top of both files. Again, best practice: never hard-code your values; always pass them in as environment variables. Now back to our Service Bus: we made a topic, and the next thing we
need is a subscription. If we go into the topic, we can create ourselves a subscription, which we'll call mysubscription. Notice the max delivery count has to be a value between 1 and 2000; I'm going to say 12 for fun. You can set the idle lifetime, and notice that if we wanted first-in, first-out behavior we would check the box to enable sessions; we'll leave that alone, it doesn't matter too much for our demo. Go ahead and create the subscription; it's created. Now go back to the Service Bus namespace, and on the left-hand side go to Shared Access Policies, click into RootManageSharedAccessKey, and grab the primary connection string (the secondary would work too, it's just an optional second one; they always give you two). Paste that into the .env file, and if this is all correct, it should just work.
Before we run it: we actually did want the CLI installed, because I wanted to show you that there was stuff in the queue. With Azure Storage queues I couldn't show you, because I didn't know of a command at the time, at least I believe that was the case, but let's take a look and see what we can see. Let me double-check my instructions off screen, since I ran this for one of these demos; maybe it was for the queue. Right: I didn't run it in the last lesson, but there is an az servicebus queue show command, and the idea was to show you the message count of ten, so you could inspect the queue that way. Since we saw it in the UI, I wasn't too worried about it. For this one there's an az servicebus topic show, so we'll do that for fun.
But first we need to send our messages, using the send-to-topic code. I don't think we read through it, so let's quickly look; it's very, very similar to the last one. You have your messages to send, you create a client, you create a sender, you create a message batch, and you send it. It's basically identical; the one difference I can find is the topic name: createSender is given the topic name instead of the queue name, and I imagine that's how it knows the difference. Anyway, let's execute the code: node send-to-topic.js. Okay, it sent the messages. We didn't do this last time, so this time type az servicebus topic show. We need to set the resource group; let's go look up what it's called: az204-servicebus-topic. We also need to specify the namespace name, which is servicebustopic-723 (because we couldn't get the name we wanted), and then the name, which I assume is the topic name, so mytopic. Hit enter.
It says the command is misspelled or not recognized; did you mean servicebus? Yeah, I've got to spell that right. Hit enter, and that looks fine, but I want to specify the output as YAML (just hit up on your keyboard to get back to the previous command), then enter; this is a little easier to read. What I'm looking for is something like a message count, and I don't see it, so I guess it's not visible here the way it is for a queue, where the same kind of show command surfaced the message count. Let's look at what we can see in the portal UI instead. We have one subscription, so click into it: there's the max size, and there it is, message count 10, max delivery count 12. So the 10 is our messages, and the 12 is the delivery count we set; that's where it's being counted.
Let's run the other script to receive. I'll double-check that we set the subscription variables, and we did, that's all good, so type node receive-from-subscription.js. If you're wondering how I'm auto-completing without typing the whole name, I just hit tab on my keyboard. It's receiving those messages, good, and it finished. I'll refresh and see if there's any difference: notice the message count is now zero. So when the messages were published to the topic, they were held in the subscription, with the count showing 10 yet to be delivered; when we ran the receiver they were consumed, and that number cleared out. That's all we really need to learn for topics, so let's make our way over to Resource Groups, find our service bus topic group, and delete it. There it is deleting; we're all good to go. And as always, don't just trust Azure to delete these things: go back and check in three or four minutes to make sure it's deleted, so you don't have things lingering around. There you go.
All right, let's compare Event Grid, Event Hubs, and Service Bus, because these are all event-driven services for application integration, and they all use event buses as a means of working with event data. It gets confusing because they have overlapping responsibilities, so let's clear that up. Event Grid is a serverless event bus, and I'd say you'd want it for Azure service-to-service communication: you're integrating service to service, not necessarily deploying fully custom web applications or traditional applications onto virtual machines. It's more for cloud-native, cloud-first kinds of builds; it's dynamically scalable, low cost, with at-least-once delivery of a message. Event Hubs is for streaming data: low latency, able to receive and process millions of events per second, with at-least-once delivery of an event. Streaming, by its nature, is generally more expensive than traditional things like a queue or pub/sub for web applications, which is what Azure Service Bus is: reliable asynchronous message delivery that requires polling (and technically push, for pub/sub), with advanced messaging features such as first-in-first-out, batching, sessions, transactions, dead-lettering, temporal control, routing and filtering, and duplicate detection, with at-least-once delivery and optional ordered delivery of messages. It really sounds like Service Bus has it all, but it doesn't operate at the same scale as Event Hubs, and it doesn't give you the same hands-off, serverless approach that Event Grid does. But you have all those options, and you will use them all. There you go.
Hey, this is Andrew Brown from ExamPro, and we're taking a look at an introduction to Redis. So what is Redis? Redis is an open-source, in-memory data store, and it acts as a caching layer or a very fast database. It has a lot of utility, but those are the two most common uses. One thing to know is that because it's in-memory, the data is highly volatile, so there is a chance of data loss. Even though it is super fast, responding within milliseconds, like five or ten milliseconds, it's not typically used as a primary database, though there are tools out there that make Redis more durable, so it's really up to you. The other thing about Redis is that it's a key-value store, so it's not like using a relational database, and it has its own data structures, its own data types: strings, sets, sorted sets, lists, hashes, bitmaps, bitfields, HyperLogLogs, geospatial indexes, and streams. The ones in red I want to go over with you, because I think they're worth your time. To use Redis, you'll either use a programming library (they're available in pretty much every language) or the Redis CLI. Getting my pen tool out for a second: you can see we're logged in with redis-cli, we do a PING to the server and it gives back PONG, and here we're setting a simple key-value pair, SET mykey somevalue, where somevalue is the string we're setting. It is that simple; it's a very simple database, but it's super, super fast. Let's take a look at some of those data structures, or data types, to see how we interact with this database.
All right, let's look at the different data types, or data structures, available to us in Redis. The first is strings, the most basic and most important one, because everything is basically a string at the end of the day in Redis. Strings are binary-safe, so they can contain data such as a JPEG image or a serialized Ruby object; they can represent different things, including numbers, which is kind of odd. Strings have a max size of 512 megabytes, so you get a lot of room there. Often you're doing a GET and a SET: GET this key and you get back the string; SET this key with that value. Very simple to use. Because strings can represent numbers, there are special string commands that act as atomic counters: increment, decrement, increment by a certain amount. Notice the value is a string, but we can INCR it and it returns 11; Redis interprets it as an integer, which is interesting. The most common string commands are things like GET, SET, APPEND (which adds something to the end of the string), and EXISTS (which checks whether a key exists).
Then you have lists. Lists are ordered collections of strings, and they don't guarantee unique elements, so you can have duplicates. You do things like LPUSH and LRANGE; it's kind of like having an array in other languages. In the example they do a range ending at -1, and it actually returns all of the elements; that's a trick to get everything in a range, but you could say 1 to 2 and it would return just elements one and two. Common commands for lists are LPOP, RPOP, LPUSH, and LPOS (for position). LPOP removes the element on the left-hand side, the first one, and returns it; RPOP works on the right-hand side, so if LPOP returned "hello", RPOP would return "world" and also remove it from the list. LPUSH adds a new element, and LPOS gives you a position: you provide a string and it tells you its position in the list.
Then you have sets. They seem like lists, but they're unordered, so there's no guarantee that if you request the contents they'll come back in the order you expect. The advantage, though, is that members are unique: if you add the same thing to a set twice, like "world", it will only be there once. That's for when you need a unique collection of things. The most common commands are SADD (add a member), SMEMBERS (show all the members), SMOVE (move a member from one set to another), and SPOP, similar to the pops we saw with lists, which removes and returns one or more random members from a set.
Then we have hashes. Hashes represent a mapping between string fields and string values; they're basically like Ruby hashes, but if you don't know Ruby hashes, you've probably seen JSON objects, or Python dictionaries. The idea is you make a hash and set a field, which is just a key, like field2, and its value. Pretty straightforward. Common commands for hashes are HGET to get a value, HDEL to delete a value, HMSET to set multiple values, HMGET to get multiple values, HVALS to get only the values, and HKEYS to get only the fields, which are the keys in this case.
Then we have sorted sets. These are similar to sets, but each member has an associated score: when you add something you use ZADD and give it a score, like 1, 1, 2 in the example. These are really great for leaderboards, or anywhere you need to score things and sort that way. Common commands are ZADD to add a member with a certain score, ZREM for removing, ZRANGE to return a range of elements from the sorted set by score, ZRANK to return the rank of a member in the sorted set (it's not giving you the score, it's giving you where the member is ranked within the list), and ZSCORE, where you provide the member and it tells you what its score is.
Those are the data types you should get some hands-on time with. There are some interesting ones like streams, but this is pretty much the core of Redis. Redis is not hard to learn or use, so you've just got to jump in there and use it.
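The command semantics above can be sketched against a tiny in-memory stand-in. This is illustrative JavaScript, not a Redis client (the helper names are mine), but a real client such as node-redis exposes the same verbs:

```javascript
// Toy in-memory stand-in for a few Redis commands, to show their semantics.
const kv = new Map();    // strings
const sets = new Map();  // sets
const zsets = new Map(); // sorted sets: member -> score

// Strings: everything is stored as a string; INCR parses it as an integer.
const set = (k, v) => kv.set(k, String(v));
const get = (k) => kv.get(k) ?? null;
const incr = (k) => { const n = parseInt(get(k) ?? "0", 10) + 1; set(k, n); return n; };

// Sets: adding the same member twice keeps only one copy.
const sadd = (k, m) => { if (!sets.has(k)) sets.set(k, new Set()); sets.get(k).add(m); };
const smembers = (k) => [...(sets.get(k) ?? [])];

// Sorted sets: members carry a score; ZRANK is the position by ascending score.
const zadd = (k, score, m) => { if (!zsets.has(k)) zsets.set(k, new Map()); zsets.get(k).set(m, score); };
const zrank = (k, m) => [...zsets.get(k).entries()]
  .sort((a, b) => a[1] - b[1])
  .findIndex(([member]) => member === m);

set("counter", "10");
console.log(incr("counter"));             // 11, stored back as the string "11"

sadd("greetings", "world");
sadd("greetings", "world");
console.log(smembers("greetings"));       // ["world"], the duplicate collapsed

zadd("leaderboard", 100, "alice");
zadd("leaderboard", 250, "bob");
console.log(zrank("leaderboard", "bob")); // 1 (ascending by score: alice=0, bob=1)
```

Note how INCR still leaves a string behind, how the set swallowed the duplicate, and how ZRANK reports position rather than score, exactly the distinctions described above.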
All right, we just took a look at Redis; now let's talk about Azure Cache for Redis. Azure Cache for Redis is based on the popular open-source Redis cache, or data store. It gives you access to a secure, dedicated Redis cache that Microsoft manages and that you can access from any Azure application. All the major cloud service providers have a managed Redis service, because it's just super useful, and we'll talk about the use cases in a moment. Azure Cache for Redis is suited for high-throughput, low-latency requirements where the same data is often requested; remember those three key points when deciding where to use it. There are serverless NoSQL databases that can operate extremely quickly, with guarantees on response time, but Redis is just super fast, faster than everything else out there, because it's in memory. Azure Cache for Redis is commonly used in these scenarios, and I really want you to pay attention, because these are very common, very practical, and as a developer you really should know them. A very common use case is storing session data. If you have a web application deployed across multiple virtual machines or containers, a hard part is knowing where the session, the actual state of a user, is stored. You can't keep it on one virtual machine, because the other ones won't know about it, so you need to store it somewhere shared, and Redis is a very common choice for that.
Another very common use case is storing cached HTML or JSON to speed up response times. Something else we normally see here is Memcached, another caching solution often used for HTML; either way, this is something you put in front of your web application to speed up web pages or API calls. Another use case is a job or message queueing system. If you're in the Ruby on Rails world, a very common queueing system is Sidekiq, and it uses Redis; likewise, Rails' Action Cable, which does pub/sub for real-time features like building a chat system or a game, is backed by Redis. Those solutions aren't going to change; you can usually use Redis for those things. Another one is putting it in front of a database to reduce read contention, which means too many reads are hitting the database and slowing everything down. You can put a cache in front of it to improve response times for fetching data and take load off the database; or maybe you have a database that's expensive to call, so it's going to save you money. You can put this in front of Cosmos DB or Azure SQL, very common cases, or really any kind of database that returns data. This is called the cache-aside pattern, something you want to remember: the cache sits in front of the database, and if the data is up to date and present in the Redis store, it's served from there; if not, the request goes to the database. The database layer can also be proactive and push data into Redis, so instead of a hit-or-miss lookup, the data is already there.
Just to visualize where these sit, I'll get my pen tool out. The first spot is where Azure Cache for Redis sits between the request and your application, maybe hosted on Azure App Service: before a request makes it into your application, it checks the cache, which can reduce the load on your application and improve load times if there are things that are computationally expensive. Then there's your database: between your application and your database, requests hit the Redis cache first; if the data is there, it's returned, and if not, the request goes straight to the database. The other common use case is a fast database for real-time data, maybe for queueing jobs, video game leaderboards, things like that. So that's pretty much it for Azure Cache for Redis. You've just got to get some hands-on experience with it; it's very simple, not too complicated, and very useful.
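The cache-aside flow just described can be sketched in a few lines. Here a Map stands in for Redis and a function stands in for the database call (names like makeCachedReader are mine; a real implementation would also put a TTL on cached entries so they expire):

```javascript
// Cache-aside sketch: check the cache first; on a miss, read the database
// and populate the cache so the next request is served from memory.
function makeCachedReader(db) {
  const cache = new Map(); // stand-in for Redis
  let dbHits = 0;          // count how often we fall through to the database
  return {
    get(key) {
      if (cache.has(key)) return cache.get(key); // cache hit: skip the database
      const value = db(key);                     // cache miss: hit the database
      dbHits += 1;
      cache.set(key, value);                     // populate for next time
      return value;
    },
    dbHits: () => dbHits,
  };
}

const reader = makeCachedReader((key) => `row for ${key}`);
console.log(reader.get("user:1")); // "row for user:1" (from the database)
console.log(reader.get("user:1")); // "row for user:1" (from the cache)
console.log(reader.dbHits());      // 1
```

Two reads, one database hit: that's the read-contention reduction the pattern is for.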
Before taking the AZ-204 exam, it's recommended to first complete AZ-900 (Azure Fundamentals) and AZ-104 (Azure Administrator Associate) certifications. This progression builds foundational and administrative knowledge, preparing you for the developer-focused AZ-204 topics.
Azure Functions are event-driven pieces of code that respond to triggers and integrate with other services via bindings, requiring a storage account for state. Hosting plans include Consumption (serverless and scales to zero), Premium (pre-warmed instances for predictable performance), and Dedicated (App Service Plan for long-running needs). Choose based on scalability and performance requirements.
Azure Resource Manager (ARM) templates are JSON-based declarative files that define Azure resources and configurations. They enable automated, repeatable deployments by specifying parameters, variables, and resources, supporting testing, modularity, and integration with CI/CD pipelines for efficient infrastructure management.
Azure API Management secures APIs through authentication, throttling, and caching policies, while providing features for API versioning, documentation, and analytics. It offers a developer portal for API consumers to explore and test APIs, making API management streamlined and user-friendly.
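Throttling and caching in API Management are expressed as XML policies. A sketch of an inbound/outbound policy that rate-limits callers and caches responses might look like this; the specific limits and cache duration are illustrative values.

```xml
<policies>
  <inbound>
    <base />
    <!-- Throttle: at most 10 calls per 60-second window -->
    <rate-limit calls="10" renewal-period="60" />
    <!-- Serve from cache when a cached response exists -->
    <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" />
  </inbound>
  <outbound>
    <base />
    <!-- Store responses in the cache for 5 minutes -->
    <cache-store duration="300" />
  </outbound>
</policies>
```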
Azure Container Instances (ACI) provide fully managed, quickly provisioned Linux or Windows containers for running applications without managing servers. Azure Container Registry (ACR) is a private Docker image repository that stores container images, supports automated image builds, and integrates with CI/CD pipelines. In short, ACR stores and manages container images, while ACI runs the containerized workloads.
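A container group in ACI can be described declaratively in YAML and deployed with `az container create --resource-group <rg> --file <file>`. The sketch below is illustrative (the group name, region, and registry/image names are assumptions) and shows a single container pulled from an ACR registry:

```yaml
apiVersion: '2021-10-01'
location: eastus
name: demo-group
properties:
  osType: Linux
  restartPolicy: Always
  containers:
    - name: web
      properties:
        image: myregistry.azurecr.io/web:latest   # image stored in ACR
        resources:
          requests:
            cpu: 1
            memoryInGB: 1.5
        ports:
          - port: 80
  ipAddress:
    type: Public
    ports:
      - protocol: TCP
        port: 80
```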
Azure Monitor collects and analyzes telemetry from applications and infrastructure, providing metrics, logs, and alerts. Application Insights focuses on application performance monitoring with SDKs for various languages, enabling usage analytics, custom event tracking, and diagnostic insights through dashboards and workbooks to optimize app health and performance.
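Logs collected by Azure Monitor and Application Insights are queried with the Kusto Query Language (KQL). As a hedged example, a query like the following against the Application Insights `requests` table would summarize failed requests by result code over the last hour:

```kusto
// Failed requests per result code over the last hour
requests
| where timestamp > ago(1h) and success == false
| summarize failedCount = count() by resultCode
| order by failedCount desc
```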
Azure offers multiple storage types like Blob, File Shares, Queues, Tables, and Disks with performance tiers Standard (HDD) and Premium (SSD). Blob storage supports Hot, Cool, and Archive access tiers, which help optimize costs by aligning storage pricing with the data access frequency; Hot for frequent, Cool for infrequent, and Archive for rarely accessed data.
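Moving blobs between access tiers can be automated with a lifecycle management policy. The sketch below (the rule name, `logs/` prefix, and day thresholds are illustrative) moves block blobs to Cool 30 days after modification and to Archive after 90:

```json
{
  "rules": [
    {
      "name": "tier-down-logs",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "logs/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 }
          }
        }
      }
    }
  ]
}
```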
This summary and transcript were automatically generated using AI with the Free YouTube Transcript Summary Tool by LunaNotes.

