Starting with a special thanks to Osvaldo Daibert (GBB at Microsoft), Marcus Milhomem (CSA at Microsoft), and Kedar Joshi (PM on the Azure API Management team) for the collaboration on the proposed solution.

If I had to pick the top ask from my clients when designing a new architecture for a new environment in the public cloud, it would be: "I need to keep my entire set of data, APIs, and underlying applications private". We all know that this is a valid concern, as security becomes more critical every day for companies all around the world.

That’s why I’ve decided to put together some articles that address those concerns. If you haven’t had the chance to see it, I recently wrote a blog post here on this website that proposes a viable option to protect APIs (at the network level) sitting under the API Management (APIM) service in Azure by leveraging another Azure service, Application Gateway.

Today, I want to discuss another very important aspect of these “private architectures”, as it relates to the environments where these APIs are being hosted. Hopefully, you’re going to join me in this discussion.

Discussing “private environments”

A secure and reliable architecture for hosting APIs is often a composition of several pieces working together. A gateway, firewall, API management, identity provider, and web container (where the API will live) are common parts of it, as you can see in the example displayed by Figure 1.

Figure 1. A reliable and secure architecture for APIs in Azure

When it comes to the web container, meaning the place where APIs are deployed, Azure gives us several options, ranging from regular virtual machines all the way up to Platform as a Service (PaaS) environments, like Web Apps, App Service Environment, Kubernetes Service (PaaS-like), Service Fabric, and more.

The right approach (or, in practical terms, the right service selection) for your scenario will depend on several operational aspects of the API, usually mapped by gathering answers to some assessment questions. Among them, we could highlight the following:

  • Does the API depend on a given version of the OS?
  • Does the API work in stateless mode, or is it stateful only?
  • What is the technology stack involved?
  • Is there any need to manage the API’s execution environment at the VM level?
  • Does the API depend on some external component?
  • Does the API communicate with external (outside of its network) services?

Let’s assume that, after a couple of assessment sessions, we were able to determine that a PaaS option would be the perfect fit for our needs, so we’re going for it.

When it comes to PaaS offerings for that purpose, there are some very good options in Azure, both container-based (Kubernetes Service, Container Registry, Web Apps for Containers, Service Fabric, and Batch) and code-based (Web Apps and App Service Environment being the most popular ones).

Among the options just mentioned, some offer the possibility of being deployed privately, meaning they are reachable by other resources only from inside the same Azure VNet, and some don’t. There is applicability for both scenarios in the real world but, in today’s post, as mentioned early on, I’ll stay focused on a specific service and on how to make it private – private Web Apps.

Figure 2 presents a drill-down of the possibilities within these services.

Figure 2. Services drill-down

Web Apps vs. App Service Environment (ASE)

As you can see in Figure 2 (dotted part), when it comes to Web Apps, there are two different approaches by which we could achieve this post’s goal (that is, hosting APIs privately):

  1. Leveraging regular Web Apps in partnership with some additional services (I’m referring specifically to Private Link/Endpoint and Private DNS).
  2. Leveraging the App Service Environment (ASE) service.

It is extremely important for us to clearly understand the differences between the two approaches before we move forward and dig into the implementation details, though. So, let’s do it.

App Service and Web Apps

App Service is a first-class group of services in Azure designed to cover demands related to modern web applications under the PaaS model. Among these services, there is one in particular that is specifically tailored to host web applications; it is known as Web Apps.

A Web App is nothing but either a virtual instance of Internet Information Services (IIS), for apps deployed on Windows, or a Docker container holding a specific image, if deployed in a Linux environment.

Web Apps run on top of App Service Plans, which can be seen as a supporting service in Azure that wraps up the compute resources (virtual server instances) a given App Service service needs to function properly. A single App Service Plan can support multiple App Service services, and you can easily set up both scale-up and scale-out routines to reach high availability. Figure 3 illustrates that idea.

Figure 3. App Service’s services on top of App Service Plan

By design (meaning, due to its PaaS nature), a Web App is deployed to a specific place in Azure (internally known as a Web Space) that is out of the user’s control, and it is publicly accessible. You can control a small set of functionalities of a given Web App; however, infrastructure-wise, there is a high level of environment abstraction performed by the cloud platform, which gives users the opportunity to focus on the application rather than on "keeping the server’s lights on".

The fact that it is publicly accessible by design doesn’t mean it can’t be configured to be privately accessible only, though. By leveraging the appropriate configurations and underlying services, that possibility becomes totally doable. This is what we’re going to discuss in detail later in this article.

Here are some key reasons why customers decide to leverage Web Apps for their APIs:

  • Multiple languages and frameworks – App Service has first-class support for ASP.NET, ASP.NET Core, Java, Ruby, Node.js, PHP, or Python. You can also run PowerShell and other scripts or executables as background services.
  • DevOps optimization – Set up continuous integration and deployment with Azure DevOps, GitHub, BitBucket, Docker Hub, or Azure Container Registry. Promote updates through test and staging environments. Manage your apps in App Service by using Azure PowerShell or the cross-platform command-line interface (CLI).
  • Global scale with high availability – Scale up or out manually or automatically. Host your apps anywhere in Microsoft’s global datacenter infrastructure, and the App Service SLA promises high availability.
  • Connections to SaaS platforms and on-premises data – Choose from more than 50 connectors for enterprise systems (such as SAP), SaaS services (such as Salesforce), and internet services (such as Facebook). Access on-premises data using Hybrid Connections and Azure Virtual Networks.
  • Security and compliance – App Service is ISO, SOC, and PCI compliant. Authenticate users with Azure Active Directory or with social login (Google, Facebook, Twitter, and Microsoft). Create IP address restrictions and manage service identities.
  • Application templates – Choose from an extensive list of application templates in the Azure Marketplace, such as WordPress, Joomla, and Drupal.
  • Visual Studio integration – Dedicated tools in Visual Studio streamline the work of creating, deploying, and debugging.
  • API and mobile features – App Service provides turn-key CORS support for RESTful API scenarios, and simplifies mobile app scenarios by enabling authentication, offline data sync, push notifications, and more.
  • Serverless code – Run a code snippet or script on-demand without having to explicitly provision or manage infrastructure, and pay only for the compute time your code actually uses (see Azure Functions).

If you’re interested in knowing the specifics of how Web Apps work under the hood, please refer to this article.

App Service Environment

App Service Environment (or ASE) can be seen as a dedicated version of the regular App Service just discussed. It takes advantage of the same set of concepts and capabilities as App Service (including Functions and Web Apps) and gives you a fully dedicated cluster (therefore, one that can be privately integrated into an existing VNet in Azure) to run APIs and web applications in general.

ASEs host applications from only one customer, and do so in one of that customer’s VNets. Customers have fine-grained control over inbound and outbound application network traffic. Applications can establish high-speed secure connections over VPNs to on-premises corporate resources.

Usually, customers look into ASE when:

  • Very high scale is needed.
  • Isolation and secure network access is a requirement (private hosting).
  • High memory utilization is likely and has to be addressed.

Figure 4 showcases a common implementation of ASE. As you can see, nothing is really different from a regular App Service except for the built-in integration with an existing customer VNet in Azure.

Figure 4. App Service Environment

Services comparison

Now that you’re familiar with both services, it would be valuable to put together a comparison that can help when deciding which service better suits a given scenario. Please refer to Table 1 below for the highlights of each service.

App Services | App Service Environment
Full support for Web Apps, Functions, Mobile Apps, and more. | Full support for Web Apps, Functions, Mobile Apps, and more.
Support for both Linux and Windows. | Support for both Linux and Windows.
Built-in integration with Azure DevOps. | Built-in integration with Azure DevOps.
Built-in private deployment: No (can be made private by leveraging other Azure services, though). | Built-in private deployment: Yes.
Publicly exposed by nature. | Depends on the deployment type selected: "Yes" for external, "No" for internal.
Built-in scale up and out. | Built-in scale up and out.
Dedicated cluster: No. | Dedicated cluster: Yes.
Containers support: Yes. | Containers support: Yes.
Pricing: Cheap. | Pricing: Expensive.

Table 1. Comparison between the main aspects of each service

It is not that hard to understand the trade-offs between the two services. If you’re looking for a lightweight solution that requires a moderate level of scalability, security, and good performance, with a lower burden of managing the environment, you should give regular App Services a try first, as they are considerably cheaper than ASE.

But if you’re looking for a highly secure and scalable environment that is isolated at the web-space level and gives you higher levels of performance by behaving as a cluster, you should go for App Service Environment, despite the fact that it is naturally more expensive than a regular App Service.

At this point, you should have a good understanding of the services we’re talking about and the advantages and disadvantages of each one. We’re ready to move forward.


Americas University (AU) came to us looking for suggestions on a given scenario they have for hosting APIs.

First, it is important to mention that they have already made a decision about how this is going to work on the frontend side, meaning they have already defined a pretty efficient workflow for incoming requests.

Basically, every request will land on an Application Gateway (AG) that is already configured to route requests to APIs tagged as "external", following the Azure API Management (APIM) mappings. Requests to APIs tagged as "internal" will be automatically dropped at the AG level. This procedure was very well documented here.

Now, AU’s technical teams want to hear from us about the best options to host the APIs themselves, as they found that both ASE and App Services would be suitable for their needs.

After going through an assessment with AU’s technical team to identify the APIs’ operational model and technical needs, we were able to highlight the following aspects:

  1. The APIs will be migrated in waves. The first wave (the one we’ve been talking about thus far) consists of bringing over microservices accessed by internal applications and users only, so the load on them is not expected to be high.
  2. For obvious reasons, the APIs must run in a private environment, meaning that attempts at direct external access must be blocked right away.
  3. When it comes to the technologies used for those microservices, there are many, from .NET Core to Node.js to Python.
  4. The environment must offer the ability to integrate as easily as possible with both Azure DevOps and Jenkins.
  5. The load isn’t expected to be high but, still, it would be great if the environment could automatically scale out when demand goes up.
  6. Cost is an important constraint for the project, so it must be considered when planning the architecture for this solution.

Proposed solution and architecture

To us, it seemed pretty obvious that a solution on top of regular Web Apps would make a lot of sense as a first try, so this is what we are going to do.

The proposed architecture is displayed in Figure 5. It is worth mentioning that in today’s post we’ll concentrate our efforts on the grey-backgrounded area.

Figure 5. Solution proposed to AU’s request (private Web Apps-based)

Some general guidance on the proposed architecture can be found below.

  • We already have an APIM instance in place (deployed as an internal resource), so after deploying the App Service Plan, the internal Web App, and the API into it, we will need to establish the communication between APIM and the Web App.
  • Because regular Web Apps are public by design, we will take advantage of a service called Private Endpoint in Azure so that the Web App will only be accessible from inside the Private Endpoint’s VNet.
  • Private DNS is another service we will need to leverage to make internal name resolution possible within the private VNet.
  • SQL Managed Instance is already deployed into the same VNet and the APIs’ databases are already sitting there too, so all we have to do is adjust the API’s connection string to point to it.
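As a quick sketch of that last step, the connection-string adjustment could be done with the Azure CLI. The server name, database name, and credentials below are hypothetical placeholders, not values from this project:

```shell
# Sketch: point the API's connection string at the SQL Managed Instance.
# All values below are illustrative placeholders.
az webapp config connection-string set \
  --resource-group AUManager \
  --name aumanager-geo-api \
  --connection-string-type SQLAzure \
  --settings DefaultConnection="Server=tcp:<sqlmi-host>,1433;Database=<db-name>;User ID=<user>;Password=<password>;"
```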

Implementing the solution

As always, before starting to deploy workloads in Azure, we went through an assessment process to understand which resources would be needed versus the ones already in place. Here is what I found:

The VNet (vnet-americasuniversity) was already in place. There was enough address space available to create the new subnets we would need (two of them): webappspublic-subnet (/28) and privatelink-subnet (/28).

AU’s team pointed out that the resource group to be utilized for this solution should be the same used for the implementation of the frontend part (AUManager), so we took it.

All good! Let’s get our hands dirty and make it happen.

Step 1: Creating subnets needed

The very first step on this journey consists of creating the two subnets that will need to support the environment. The procedure of creating subnets in an existing VNet is pretty simple and very well described here. If you’re not sure how to accomplish this, refer to the link just mentioned to get there.

The first subnet we created (which we called webappspublic-subnet) will be in charge of holding our private Web App.

The second subnet (privatelink-subnet) is the one we’re going to use to host the Private Endpoint service. It requires a dedicated subnet to function properly, so here we go.
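If you prefer the Azure CLI over the portal, the two subnets could be created with something along these lines. The address prefixes below are hypothetical; use free ranges from your own VNet:

```shell
# Sketch: two /28 subnets in the existing VNet (illustrative address ranges).
az network vnet subnet create \
  --resource-group AUManager \
  --vnet-name vnet-americasuniversity \
  --name webappspublic-subnet \
  --address-prefixes 10.0.1.0/28

# Dedicated subnet to hold the Private Endpoint.
az network vnet subnet create \
  --resource-group AUManager \
  --vnet-name vnet-americasuniversity \
  --name privatelink-subnet \
  --address-prefixes 10.0.1.16/28
```

Depending on your CLI version, you may also need to disable private-endpoint network policies on privatelink-subnet before a Private Endpoint can be placed in it.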

Figure 6 displays the two subnets created.

Figure 6. Two new subnets added to the existing VNet

Step 2: Creating a new App Service Plan

This step consists of creating the web farm where our group of APIs will live, which in Azure is called an "App Service Plan".

Azure’s documentation does a good job of describing how to create it through the Azure Portal, PowerShell, or the Azure CLI. Please refer to this article to see how to get this job done.
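A minimal CLI sketch of that creation step follows; the plan name is an assumption of mine, not from this project:

```shell
# Sketch: Windows-based App Service Plan (Windows is the default when
# --is-linux is omitted). A Premium ("P") SKU is used here because the
# Private Endpoint integration performed later requires it.
az appservice plan create \
  --resource-group AUManager \
  --name aumanager-plan \
  --sku P1V2
```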

After doing this, I was able to see my Windows-based App Service Plan up and running on Azure, as showcased by Figure 7.

Figure 7. Service Plan to support Web Apps just created

Step 3: Creating a new Web App to host the first API

Now that we have our foundational App Service Plan in place, we can go ahead and create the Web App where the first API (called aumanager-geo-api) will be hosted.

Again, I’m not going to redo something that both Azure’s documentation and other blogs do so well. So, to create a new Web App and tie it to the existing service plan, please, refer to this article.

Later, we will implement the integration between the Azure Web App and Private Endpoint. For this, Azure requires a Premium ("P") tier for the Web App, so make sure to select one of the tier options under the "P" category.
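For reference, a hedged CLI equivalent of this step could look like the following (the plan name is a placeholder of mine; the app name follows the article):

```shell
# Sketch: create the Web App on the Premium-tier plan created earlier.
az webapp create \
  --resource-group AUManager \
  --plan aumanager-plan \
  --name aumanager-geo-api
```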

In the end, I was able to see my Web App up and running, as displayed in Figure 8.

Figure 8. Web App up and running

Ok. The Web App is now created but, if you try to browse to it over the internet by calling its address in your preferred browser, you will see that it responds publicly on both port 80 and port 443.

Per the customer’s requirements, we know that this behavior is not allowed, so we have to do something to address it and, somehow, deny external callers. This is where Azure Private Endpoint comes into play.

Step 4: Integrating Web App with Private Endpoint

Azure Private Endpoint is a network interface that connects your application privately and securely to a service powered by Azure Private Link.

Private Endpoint uses a private IP address from your VNet, effectively bringing the service into your VNet. The service could be an Azure service such as Azure Storage, Azure Cosmos DB, SQL, etc. or your own Private Link Service.
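To give an idea of what this step looks like outside the portal, here is a CLI sketch of creating the Private Endpoint. The endpoint and connection names are hypothetical; "sites" is the sub-resource (group id) used for Web Apps:

```shell
# Sketch: Private Endpoint for the Web App, placed in the dedicated subnet.
WEBAPP_ID=$(az webapp show \
  --resource-group AUManager \
  --name aumanager-geo-api \
  --query id --output tsv)

az network private-endpoint create \
  --resource-group AUManager \
  --name aumanager-geo-api-pe \
  --vnet-name vnet-americasuniversity \
  --subnet privatelink-subnet \
  --private-connection-resource-id "$WEBAPP_ID" \
  --group-id sites \
  --connection-name aumanager-geo-api-pe-conn
```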

Once again, Azure’s documentation is going to help us out. There’s a very comprehensive article explaining the specifics of how to integrate an existing Web App with Private Link. I went through this process myself and succeeded. In the end, I was able to see a new connection appearing under the "Networking" -> "Private Endpoint Connections" section of my Web App, as showcased by Figure 9.

Figure 9. Private Endpoint created for my Web App

The immediate results of this integration are:

  • First, from now on, your Web App is no longer reachable from the internet directly, as you can see in Figure 10 below.
Figure 10. Access blocked from the outside
  • Second, from now on, because our Web App is integrated into the local VNet through Private Endpoint, you can refer to this Web App internally by its internal IP which, in our case, is the one shown in Figure 11.
Figure 11. Internal IP assigned to aumanager-geo-api Web App

Great! Now the Web App is denying any external access attempts, which is the expected behavior, and we also have an internal IP for local access. Next, we have to deploy the API itself to the Web App to make sure it works locally.

Step 5: Publishing the API’s code into the new Web App

There are different approaches we could take here, right? From an internal bastion server in the same VNet, for instance, you could deploy it through either FTP or Web Deploy; or, if your code lives in a source control repository (like GitHub, Azure DevOps, and such), you could configure a continuous integration (CI) and continuous deployment (CD) process to get it done.

Because Americas University’s technical teams already have a pipeline in Azure DevOps for deploying the referred API into on-premises VMs, I’m going to take advantage of it (the code is already sitting in Azure Repos) and get it done through a CI/CD process.

5.1 Setting up a self-hosted build agent

The first thing we need to remember is that the target Web App is no longer accessible over the internet, which means that Azure DevOps’ Microsoft-hosted build agents won’t be able to reach the Web App’s SCM (deployment) endpoint directly (as, by design, they rely on a public IP to get into the target environment). Therefore, as the first step, we need to set up a self-hosted build agent within the same VNet as the Web App to make it feasible.

The very first step towards getting there is to create a single virtual machine in the same VNet as the Web App. This article shows how to accomplish this in Azure.

Once I did that, I was able to see my virtual machine (called auagentwin) up and running on Azure, as you can see in Figure 12.

Figure 12. Self-hosted agent VM up and running

Then, I went through the process of configuring this VM to act as a self-hosted agent for Azure DevOps. This procedure is very well described in this article.

My self-hosted agent is a Windows-based environment, but you could achieve the same results by configuring a Linux or macOS environment for that purpose.

Then, as a final step, I went to the VM’s hosts file and added two new entries (pointing the Web App’s FQDNs to its local IP), as you can see in Figure 13.
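The entries follow this shape (the IP and hostnames below are illustrative placeholders; use your Web App’s private IP and its actual default hostnames). Note that the second, SCM-suffixed entry is what lets the agent reach the Web App’s deployment endpoint:

```
# Hosts file on the agent VM (e.g. C:\Windows\System32\drivers\etc\hosts)
# Illustrative values only.
10.0.1.4  aumanager-geo-api.azurewebsites.net
10.0.1.4  aumanager-geo-api.scm.azurewebsites.net
```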

Figure 13. Adding host entries to self-hosted agent VM

Finally, after configuring the server to automatically start the service and wait for jobs, I was able to see the agent with auto-logon enabled and working properly, as shown in Figure 14.

Figure 14. Self-hosted agent properly configured

Lastly, I needed to verify that this configuration was reflected in AU’s Azure DevOps portal. Figure 15 shows that everything seems to be working fine, as my agent is "online".

Figure 15. Azure DevOps portal reflecting build agent on

5.2 Building the CI pipeline

I won’t go too deep into the pipeline’s configuration because it is pretty self-explanatory but, because this API is .NET Core-based, we’re taking advantage of the built-in task for that purpose, as you can see in Figure 16.

Figure 16. CI pipeline

It is important to notice that, on the right side, the "Agent pool" option is filled with the self-hosted agent pool we previously created, indicating that this pipeline is going to be executed on that self-hosted virtual machine. Figure 17 shows the artifact generated as a result of the build process.

Figure 17. Build process successfully executed

Next, we take the artifact just generated and feed it into our CD pipeline, which will effectively deploy the application into the Web App. In the same way, the CD pipeline uses our self-hosted agent for that operation. Figure 18 presents the configuration in place for the release pipeline.

Figure 18. Release pipeline whereby API is being deployed

As you can see in Figure 19, once the process finishes, from the bastion server I have in the same VNet as the Web App (with its hosts file properly configured for name resolution), I’m able to see the API up and running by calling its Swagger definition. Now the API is ready to receive internal requests. The only remaining piece to be configured is the API Management mapping.

Figure 19. API up and running at the private Web App

Step 6: Configuring APIM to map Web App’s API

If you remember from the requirements list, both internal and external API calls should be managed by Azure API Management (already in place), whose configuration process we went through in this post.

The referred APIM instance is ILB-based, which means that, like the Web App we just got settled, APIM is also reachable only from inside the VNet. Hence, external calls can only reach APIM through the Application Gateway we have in front of it.

6.1 Configuring the Private DNS

Before going through APIM’s API mapping, we have to set up a private DNS zone. Why? Because APIM is a PaaS service that doesn’t allow us to access the VMs running under the hood to configure hosts files, we need another way to let APIM know that, when the Web App’s hostname is called, it has to resolve it to the Web App’s internal IP. If we don’t do this, APIM will try to access the Web App (which is private) through its outbound public IP and, as you might be envisioning, it will be denied.

First, I went through this process to get the private DNS zone set up. The result can be seen in Figure 20.

Figure 20. Private DNS zone created

Next, I had to link this DNS zone to the existing VNet so that the local resources (including the Private Endpoint) would be visible within it. To make it happen, I went through the process described here. The result can be seen in Figure 21.

Figure 21. Linking the Private DNS zone to the existing VNet

Finally, we have to add a new record set through which the resolution will effectively happen. In my case, I added one new entry with the configuration presented in Figure 22.

Figure 22. Adding a new record set into the private zone
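The three steps just described (zone creation, VNet link, and record set) could be sketched with the Azure CLI like this. The zone name follows the common convention for Web App private endpoints; the link name and IP address are placeholders:

```shell
# Sketch: private DNS zone, VNet link, and A record for the Web App.
az network private-dns zone create \
  --resource-group AUManager \
  --name privatelink.azurewebsites.net

# Link the zone to the existing VNet so resources inside it can resolve names.
az network private-dns link vnet create \
  --resource-group AUManager \
  --zone-name privatelink.azurewebsites.net \
  --name aumanager-dns-link \
  --virtual-network vnet-americasuniversity \
  --registration-enabled false

# A record pointing the Web App's name at its private IP (placeholder value).
az network private-dns record-set a add-record \
  --resource-group AUManager \
  --zone-name privatelink.azurewebsites.net \
  --record-set-name aumanager-geo-api \
  --ipv4-address 10.0.1.4
```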

6.2 Mapping the AUManager’s APIs in APIM

We’re finally ready to have our API mapped in APIM.

First, I created and published a new "Product" in APIM. This product is going to represent the AUManager backend and will hold all the internal APIs belonging to the app. To get there, I took advantage of this step-by-step tutorial. The result of that work can be seen in Figure 23.

Figure 23. APIM product created and published

Next, I imported the API previously published into the Web App (aumanager-geo-api) with only one method (GetCountries), for testing purposes, and tied it to the product just created. The path I followed to get there can be seen by following this link.
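If you’d rather script this part, the import and product association could look roughly like the sketch below. The APIM instance name, product id, path, and specification file are all assumptions of mine, not values from the article:

```shell
# Sketch: import the API from a local OpenAPI (Swagger) definition
# and attach it to the previously created product. Names are placeholders.
az apim api import \
  --resource-group AUManager \
  --service-name aumanager-apim \
  --api-id aumanager-geo-api \
  --path internal/geo \
  --specification-format OpenApi \
  --specification-path ./aumanager-geo-api.swagger.json

az apim product api add \
  --resource-group AUManager \
  --service-name aumanager-apim \
  --product-id aumanager-backend \
  --api-id aumanager-geo-api
```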

Figures 24 and 25 showcase the API properly mapped within APIM.

Figure 24. aumanager-geo-api mapped into APIM
Figure 25. API’s internal configuration

It is worth mentioning that, following the guidelines of the frontend implementation, APIs under APIM should be flagged as either "internal" or "external" so the Application Gateway knows whether a given request has to be routed to a valid backend pool (apimbackend) or to a dead end (sinkpool). That’s why our API here has been marked as "internal" in the suffix field.

Step 7: Testing

Great! Now that every piece of the puzzle seems to be in place, we can finally run the tests. For this, I’m going to take advantage of the bastion server we already have.

7.1 Testing through the APIM Developer Portal

As you may know, APIM brings, as part of its tooling set, a very useful web UI (called the Developer Portal) that allows us to explore different aspects of a given API.

In this case, I’m going to leverage it to try out our GetCountries method. This method expects a couple of pieces of information in the request headers. Here’s an example of how to make the call to the service.

Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6IkN0VHVoTUptRDVNN0RMZHpEMnYyeDNRS1NSWSIsImtpZCI6IkN0VHVoTUptRDVNN0RMZHpEMnYyeDNRS1NSWSJ9.eyJhdWQiOiJodHRwczovL2FtZXJpY2FzdW5pdmVyc2l0eS5uZXQvYXVtYW5hZ2VyLWdlbyIsImlzcyI6Imh0dHBzOi8vc3RzLndpbmRvd3MubmV0LzA1ZDAyODQwLWFmMGQtNDQzNC05MWE0LTlkM2JhMzY1NGJiMC8iLCJpYXQiOjE1ODgwOTAzNjEsIm5iZiI6MTU4ODA5MDM2MSwiZXhwIjoxNTg4MDk0MjYxLCJhY3IiOiIxIiwiYWlvIjoiQVNRQTIvOFBBQUFBdmR1Rk9Jako4NFZmaWpVb3BsZmc3cHp4eU5Hb0lXTVZzdGU4YU9xY0ZyWT0iLCJhbXIiOlsicHdkIl0sImFwcGlkIjoiYjY3ZmYyN2EtZDc4ZS00OWE4LTk3NjEtMmFkMjZmNzcyMzcyIiwiYXBwaWRhY3IiOiIxIiwiZmFtaWx5X25hbWUiOiJMb3BlcyBTYW5jaGV6IiwiZ2l2ZW5fbmFtZSI6IkZhYnJpY2lvIiwiaXBhZGRyIjoiZmRlNDo4ZGJhOjIwMDA6MzEzODo2ZjIwOjEwMDphMDE6NDA1IiwibmFtZSI6IkZhYnJpY2lvIExvcGVzIFNhbmNoZXoiLCJvaWQiOiI4NzliYzlkOC02MzRmLTQwOTItYTRiMC1kOTE0NzYwYTllMjIiLCJyaCI6IjAuQVRZQVFDalFCUTJ2TkVTUnBKMDdvMlZMc0hyeWY3YU8xNmhKbDJFcTBtOTNJM0kyQUIwLiIsInNjcCI6InVzZXJfaW1wZXJzb25hdGlvbiIsInN1YiI6ImJSQnIwblhPUUdJWUJBRWRuWFRZb2k4V1BhR29DQVNKb3ZnMk1QRXRJS3MiLCJ0aWQiOiIwNWQwMjg0MC1hZjBkLTQ0MzQtOTFhNC05ZDNiYTM2NTRiYjAiLCJ1bmlxdWVfbmFtZSI6ImZhYnJpY2lvLnNhbmNoZXpAYW1lcmljYXN1bml2ZXJzaXR5Lm5ldCIsInVwbiI6ImZhYnJpY2lvLnNhbmNoZXpAYW1lcmljYXN1bml2ZXJzaXR5Lm5ldCIsInV0aSI6IlpVVUZVdVVxcWtHdzBwXzB1UzJCQUEiLCJ2ZXIiOiIxLjAifQ.yGqgztbomtSschX8cOt6q-3-0O6ZsrPw3G6yfhv-E1wwPlDl-YRRGEl0RrYQBOTg7oKHuDO0d6oC7GOfjBo0OLrbwZ4wli1tHmP5r6kb17qq8_JF5k4vFp1zXW65xL9F5EkfuOJV306wEt-51oBTODogcdOBHdqRw4KgIrTwxUZ9Ma5pdhMEofFI9woBX6CkdUfoXtxtIHWZmbQwuoT_2cVpHFWyMN6Q0yjCBjFv4dzBfXEyyjGGmQpX6W8cAXe2HeX-Y13GtUsSmaEPIdj0qc0ww_kl3_5wtE1miYeY1ya9ZxT6JHb0KZEQF6aEdlhlvM8CULVYlz8tG24kqN41MQ
Ocp-Apim-Trace: true

The call’s result is displayed in Figure 26.

Figure 26. API properly returning countries over APIM’s developers portal

7.2 Testing through Postman

To make sure everything is working even outside of APIM’s built-in tooling, we’re going to perform the same API call, but this time leveraging Postman. Figure 27 shows Postman’s configuration and the associated result.

Figure 27. Successful call performed through Postman over APIM
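If you don’t have Postman at hand on the bastion server, an equivalent call could be made with curl. The hostname, path, key, and token below are all placeholders:

```shell
# Sketch: calling the API through APIM from inside the VNet.
curl --header "Ocp-Apim-Subscription-Key: <your-subscription-key>" \
     --header "Authorization: Bearer <your-aad-token>" \
     "https://<apim-internal-hostname>/internal/geo/GetCountries"
```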

Wrapping up

Done! After all this, we were able to prove that regular App Services (specifically Web Apps) can be leveraged privately to provide isolation-like environments for web applications.

Some important considerations must be made, though.

  1. This approach doesn’t replace ASE at all. It is nothing but an alternative for customers looking for lightweight environments that need to be hosted privately.
  2. ASE is recommended if the target environment is either memory- or processing-sensitive. Also, it should be your first alternative if you’re looking for 100% isolated environments behaving as clusters.
  3. At the time I wrote this article, Azure Private Endpoint for Web Apps was in public preview. Please refer to this link for updated information about it.
  4. The proposed solution takes advantage of a built-in Azure service (Private DNS) to enable communication between PaaS services within the same VNet. However, it is important to mention that a properly configured custom DNS server could also serve that purpose.
  5. The solution’s VNet DNS is Azure-provided. If you have your own custom DNS set up for your VNet, please make sure the appropriate routes are in place to make the communication flow.
  6. ILB-based APIM is available under two tiers only, "Developer" and "Premium", which has direct cost implications. So please keep that in mind when architecting something that includes the service.

Hope it helps! See ya.


Diego Melgarejo · May 31, 2020 at 8:44 pm

Awesome article. Great work!

Edmar · March 10, 2021 at 4:40 pm

Great article Fabricio! I’m working on a similar project and have some experience with Azure APIM. Observation: I know that Private Link + Private Endpoint is great, especially to reduce VNet + NSG management, but couldn’t you reach the same goal without trouble by just inserting an IP filter on the web app?

    Fabricio Sanchez · March 12, 2021 at 11:16 am

    Hi Edmar – Different goals. With Private Endpoint you not only protect your web app against external access but also enable VNet-level communication.

Joe · May 5, 2021 at 6:09 am

Hi Fabrio,

Why do you use a sinkpool backend? Wouldn’t the Application Gateway drop any traffic (i.e. without /external/* formatting) anyway?

    Fabricio Sanchez · May 18, 2021 at 4:21 pm

    Hi Joe,
    Thank you for the question.
    Once a request arrives at the Application Gateway, it has to be routed somewhere. The ‘sinkpool’ will be the destination of every request that doesn’t meet the internal routing criteria.
    Does that make sense?
    Hope it helps.
