Most modern web applications are API-based, right? Americas University (AU) is no exception. There are, at the very least, dozens of applications in their current environment that rely on different types of APIs to work properly.
It turns out that, over time, Americas University's technical teams have up-skilled themselves on Azure, which has enabled them to get things done on Microsoft's cloud platform faster than ever before. As part of their recent deliverables, they already have part of these APIs running on Azure services (most of them in Web Apps under an App Service Environment, or ASE).
So, as a next step, they came to us asking for guidance on how to add an additional layer of protection (infrastructure-wise) to their APIs already deployed in Azure. Our suggestion? API Management (APIM) in partnership with Application Gateway (AG), acting as a reverse proxy and, mainly, as a WAF (Web Application Firewall).
So, with this post, I'm going to explore a possible implementation for this scenario. I'll guide you not only through the implementation itself (which will be mostly PowerShell-based and expands Andrew Kelleher's script, which is in turn based on Microsoft's documentation), but also give you the details behind each step of the process. By the end, you'll understand how these two pieces of the puzzle can come together to provide network-level security to existing APIs.
Also, it's worthwhile to mention that this implementation is 100% based on a real scenario, so when you see "Americas University" throughout the text, you could easily replace it with your company's name (if that helps).
Digging into the scenario
At this point, you're already familiar with the challenge we're going to work through in the upcoming sections. However, there are some important aspects pointed out by AU's technical teams that we need to cover within the proposed solution, so let's take a look at them in more detail.
- APIs are mostly private (meaning they are only reachable from within the same VNet in Azure), but some of them have to be reachable from the outside, as they provide information to external applications and services.
- APIs flagged as internal can only be accessed by services and applications from the same network context (peered VNets and VPN/ExpressRoute also apply).
- APIs were deployed into an internal ASE for Linux, so later on we will need the APIM service to map those APIs into the ASE.
- WAF is required and should act in "Prevention" mode, so requests matching known attack patterns are dropped right away.
- Later on, Azure AD will be used as the identity provider (authorization and authentication) for the running APIs, so APIM must support that integration natively.
- APIM must be deployed in internal mode only (meaning it should be connected to the existing VNet already in place on Azure).
- The solution must provide a single incoming endpoint for requests arriving into the environment.
- Developers and dev leads must be able to access both the developer and publisher portals internally (from a resource living in the same VNet) and externally (from the internet).
- The communication between the two components must happen over SSL (leveraging existing certificates).
This way, Figure 1’s architecture summarizes the communication flow that is about to be implemented.
The proposed solution
To address the points reflected in the architecture above, as you can see, we're bringing together two different Azure services: Application Gateway and API Management, both sitting in front of an ASE (whose implementation and configuration we won't be covering in this post).
At the AG level, we're going to set up a URL-routing mechanism that makes sure each request goes to the right backend pool (you will understand what this is about soon) depending on the URL format of the API call.
Basically, URLs formatted like api.americasuniversity.net/external/* will be able to reach the backend and interact with the requested APIs, while calls formatted as api.americasuniversity.net/* will be redirected by AG to a dead end (meaning, a backend pool with no target set up).
Internal calls (the ones coming from resources in the same VNet) will be accepted and properly mapped by APIM directly.
It's important to mention that, per the requirements brought by AU's technical teams, developers must be able to manage APIs and their configurations from both internal and external environments. That's why we are going to add a rule at the AG level to properly redirect users under portal.americasuniversity.net/* to the developer portal.
Finally, at the APIM level, we will have our APIs set up to accept calls under these same URL patterns.
Figure 2 showcases some additional details for the proposed solution. Please notice that throughout this post I’m keeping myself focused specifically on the gray-backgrounded area of the flow. Further aspects of this environment will be discussed in a future post so, stay tuned!
Application Gateway (AG)
AG is nothing but a layer-7 web traffic load balancer which implements a bunch of very useful features that enable customers to manage traffic to their web applications. Figure 3 brings a comprehensive view of AG's components under the hood, which makes understanding the whole process a lot easier.
- Frontend IP. An AG can "hear" requests either from the outside (I mean, directly from the internet), from the inside (from within the same virtual network), or both. Either way, the request arrives at this component, the Frontend IP. It can hold a public IP, a private IP, or both.
- Listener(s). As the name suggests, this is the next component in the schema. It listens for incoming requests (on a specific port and under a given HTTP pattern) and triggers the associated rule.
- Rule. The rule is the element designed to move requests from the frontend (mapped by the listener) to the related backend pool. To make this movement happen, it counts on an "HTTP Setting", described below.
- HTTP Setting. A small entity which wraps up some useful configuration for the backend pool: port, session stickiness, probe, and timeout.
- Backend pool. A backend pool routes requests to the backend servers, which serve the request. Backend pools can contain NICs, virtual machine scale sets, public IP addresses, internal IP addresses, FQDNs, and multi-tenant backends (such as App Service).
It's worthwhile to mention that Application Gateway ships a very interesting set of features. Among them, we can highlight WAF, SSL termination, reverse proxying, session affinity, URL-based routing (this one is going to be crucial for the approach we are getting into), and more. To see the full list of features, click here.
API Management (APIM)
From a 10k-feet view, API Management is a way for us to create a consistent and modern API gateway for existing back-end services.
API Management provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. You can use Azure API Management to take any backend and launch a full-fledged API program based on it.
Component-wise, APIM brings three main building blocks, briefly described below and displayed by Figure 4.
Developer portal. The web UI whereby developers can connect and work on APIs' configuration, products, and further aspects of the APIs being managed. It also allows developers to learn about the APIs, view and call operations, and subscribe to products.
API Gateway. The APIM engine that runs under the hood to provide the management features offered by the service.
Publisher portal. The web UI that allows you to go through configurations to customize the developer portal.
There are plenty of features made available by the service, suitable for different scenarios. For this project, we are especially interested in some of them, highlighting:
- The private (internal) deployment model, which allows us to connect APIM to an existing VNet so it is only reachable from inside the network context.
- Custom domains with certificates. This allows us to customize the way we access the different aspects of the service (API gateway, portal, and more) while leveraging the certificates to make sure the communication happens in a safer way.
- Mock-up APIs. As mentioned early on, we're not going to integrate APIM with the ASE in this post, as it is out of scope here, so AU's technical teams will create "mock-up" APIs to simulate the environment's behavior.
To see the full list of features available for the API Management service (and I strongly encourage you to do so), please follow this link.
Time to make it happen in Azure. First, we went through a quick assessment of the environment to understand what resources we already had in place so we could properly structure our PowerShell scripts. Here is what we've got:
- vnet-americasuniversity. The existing VNet (/16) has a couple of resources already deployed (among them, an ASE, a SQL MI, and some Bastion machines). The ASE, SQL MI, and VMs are already sitting on their own subnets, so we had to create two new subnets (apim-subnet and appgtw-subnet) to accommodate the APIM and AG services respectively.
- We have also created a new resource group (AUManager_Shared_Resources) to hold the services' deployment.
- Also, we were informed that the two certificates we would need were already in place. To generate both PFX and CER versions of them (the formats required by the Azure services we're deploying), I went through the process described in another post of mine here on the portal. You can see it here if you would like.
- Finally, we informed AU's IT team that we will need two CNAME records added to the DNS manager for the custom domains we're going to add to APIM, so at the proper moment, they will make it happen for us.
That’s pretty much everything we need to gather in advance to start our work, so, let’s get our hands dirty.
Step 1: Creating the resource group and required subnets
To create a new resource group under AU’s selected subscription I ran the portion of code presented below.
Next, I executed the following command sequence to add two new subnets to the existing VNet. The last two commands assign the subnets just created to variables to be used later on; the indexes 5 and 6 indicate each subnet's position in the VNet's subnets array.
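In essence, the sequence looks like this (a sketch using the Az PowerShell module; the VNet's resource group, the location, and the address prefixes are hypothetical):

```powershell
# Create the new resource group for the shared services
New-AzResourceGroup -Name "AUManager_Shared_Resources" -Location "East US 2"

# Add the two new subnets to the existing VNet and persist the change
$vnet = Get-AzVirtualNetwork -Name "vnet-americasuniversity" -ResourceGroupName "AU_Networking"
Add-AzVirtualNetworkSubnetConfig -Name "apim-subnet" -VirtualNetwork $vnet -AddressPrefix "10.1.6.0/24"
Add-AzVirtualNetworkSubnetConfig -Name "appgtw-subnet" -VirtualNetwork $vnet -AddressPrefix "10.1.7.0/24"
$vnet = Set-AzVirtualNetwork -VirtualNetwork $vnet

# Keep references to the new subnets (positions 5 and 6 in the subnets array)
$apimSubnet   = $vnet.Subnets[5]
$appgtwSubnet = $vnet.Subnets[6]
```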
We're now ready to deploy our APIM service. To get there, I executed the script displayed in the snippet below. There are important aspects to be highlighted, though:
- We inform Azure Resource Manager (ARM) that the referred APIM service must be deployed within the given VNet + subnet. This configuration is saved in $apimVirtualNetwork.
- -VirtualNetwork $apimVirtualNetwork applies the VNet configuration at deployment time.
- -VpnType "Internal" tells ARM that this APIM is private and not reachable from outside the VNet.
- We inform ARM that it should deploy APIM under the Developer tier, which supports VNet integration and fits test purposes (our case here).
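Putting those parameters together, the deployment is roughly this (a sketch; the service name, organization, and admin e-mail are hypothetical, and $apimSubnet holds the subnet created earlier):

```powershell
# Wrap the target subnet into an APIM virtual network configuration
$apimVirtualNetwork = New-AzApiManagementVirtualNetwork -SubnetResourceId $apimSubnet.Id

# Deploy APIM in internal mode, Developer tier
New-AzApiManagement -ResourceGroupName "AUManager_Shared_Resources" `
    -Location "East US 2" `
    -Name "apim-americasuniversity" `
    -Organization "Americas University" `
    -AdminEmail "admin@americasuniversity.net" `
    -VirtualNetwork $apimVirtualNetwork `
    -VpnType "Internal" `
    -Sku "Developer"
```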
If you remember the project's premises, APIM should support custom domains and communication over SSL, so next we need to set up the certificate configuration, as we didn't specify anything in that regard at deployment time. The script's comments are descriptive enough to clarify what's actually happening, aren't they?
Done. At this point, we have our APIM properly configured, including the custom domain piece with its respective certificates. Also, it should present you with a private IP compatible with the subnet under which it was deployed. Figures 5 and 6 display the evidence.
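The certificate setup follows the shape below (a sketch; the PFX paths and password are placeholders):

```powershell
# Password protecting the PFX files (placeholder)
$certPwd = ConvertTo-SecureString -String "<pfx-password>" -AsPlainText -Force

# Proxy (gateway) hostname: api.americasuniversity.net
$proxyHost = New-AzApiManagementCustomHostnameConfiguration -Hostname "api.americasuniversity.net" `
    -HostnameType Proxy -PfxPath "C:\certs\api.americasuniversity.net.pfx" -PfxPassword $certPwd

# Developer portal hostname: portal.americasuniversity.net
$portalHost = New-AzApiManagementCustomHostnameConfiguration -Hostname "portal.americasuniversity.net" `
    -HostnameType Portal -PfxPath "C:\certs\portal.americasuniversity.net.pfx" -PfxPassword $certPwd

# Apply both custom hostname configurations to the deployed service
$apim = Get-AzApiManagement -ResourceGroupName "AUManager_Shared_Resources" -Name "apim-americasuniversity"
$apim.ProxyCustomHostnameConfiguration  = $proxyHost
$apim.PortalCustomHostnameConfiguration = $portalHost
Set-AzApiManagement -InputObject $apim
```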
Great! Now, in parallel, we asked AU’s dev team to get two APIs set up in APIM for tests later on. They would be:
Both of these APIs should implement at least a mock version of a GET call. The applicability of this will become clear when we get to that point.
Deploying Application Gateway and its rules
We have part of the solution already in place (APIM); now it is time to go after the Application Gateway side of it. It never hurts to remember AG's role here: it will serve both as the public interface for external calls and as the WAF for the environment as a whole. Through some configuration (path and URL rules), AG will also filter which requests should be accepted or rejected (at the network level).
Ok, let's get into it. First, we had to provision a public IP. AG will use it to receive external calls, as it will be defined as the public endpoint for the environment.
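A sketch of that allocation (the IP's name is hypothetical; v1/WAF gateway SKUs use dynamic allocation):

```powershell
# Public IP that will become the environment's single public endpoint
$publicIp = New-AzPublicIpAddress -ResourceGroupName "AUManager_Shared_Resources" `
    -Name "appgtw-publicip" -Location "East US 2" -AllocationMethod Dynamic
```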
Next, we create a new object which holds the AG’s IP configuration. You’ll understand the role it will play soon.
Next, we create a frontend port. This will tell another piece of the configuration “which port” AG should be “hearing” requests on.
Next, we create a new configuration object, called “Frontend IP Configuration”, which ties the existing IP we allocated in our first step to the AG.
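These three configuration objects can be sketched as follows (names are hypothetical; $appgtwSubnet and $publicIp come from the previous steps):

```powershell
# Ties the gateway to its dedicated subnet
$gipConfig = New-AzApplicationGatewayIPConfiguration -Name "appgtw-ip-config" -Subnet $appgtwSubnet

# The port AG should be "hearing" requests on
$frontendPort = New-AzApplicationGatewayFrontendPort -Name "appgtw-frontend-port" -Port 443

# Ties the existing public IP to the gateway's frontend
$fipConfig = New-AzApplicationGatewayFrontendIPConfig -Name "appgtw-frontend-ip" -PublicIPAddress $publicIp
```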
Ok, let's recap so you don't lose track. At this point, we have:
- A public IP object.
- An AG’s IP configuration object.
- An AG’s frontend port object.
- An AG’s frontend IP configuration that ties the existing public IP object to itself.
Cool. Let's move on, then. Now we are about to set up components that will require the SSL certificates, so we need to go after those first. What I do next is get the certificates configured for later usage within the AG, holding them in two different object variables.
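A sketch of loading the two PFX files into AG SSL certificate objects (paths and password are placeholders):

```powershell
$certPwd = ConvertTo-SecureString -String "<pfx-password>" -AsPlainText -Force

# Certificate for api.americasuniversity.net
$apiCert = New-AzApplicationGatewaySslCertificate -Name "api-americasuniversity-cert" `
    -CertificateFile "C:\certs\api.americasuniversity.net.pfx" -Password $certPwd

# Certificate for portal.americasuniversity.net
$portalCert = New-AzApplicationGatewaySslCertificate -Name "portal-americasuniversity-cert" `
    -CertificateFile "C:\certs\portal.americasuniversity.net.pfx" -Password $certPwd
```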
Time for us to create the listeners. A given AG can host multiple listeners, meaning it can handle multiple sources of requests, depending on each listener's parameter configuration. In our case, to support the project's premises, AG will have to handle calls both to the external APIs and to the developer portal.
Table 1 shows the differences in the parameter values set up for each one of our listeners.
| APIM APIs Listener | APIM Portal Listener |
| --- | --- |
| Name = apim-gw-listener | Name = apim-portal-listener |
| Protocol = Https | Protocol = Https |
| Listening IP = 13.***.***.42 (***.cloudapp.net) | Listening IP = 13.***.***.42 (***.cloudapp.net) |
| Certificate = api.americasuniversity.net.pfx | Certificate = portal.americasuniversity.net.pfx |
| Hostname = api.americasuniversity.net | Hostname = portal.americasuniversity.net |
As you can see, AG will understand and resolve the requests in two different ways: when they arrive at port 443 under the hostname api.americasuniversity.net, and when they arrive at port 443 under the hostname portal.americasuniversity.net.
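In PowerShell terms, the two listeners from Table 1 can be sketched like this (assuming the frontend objects and certificates from the previous steps are held in $fipConfig, $frontendPort, $apiCert, and $portalCert):

```powershell
# Listener for API calls (api.americasuniversity.net)
$gwListener = New-AzApplicationGatewayHttpListener -Name "apim-gw-listener" `
    -Protocol Https -FrontendIPConfiguration $fipConfig -FrontendPort $frontendPort `
    -SslCertificate $apiCert -HostName "api.americasuniversity.net" -RequireServerNameIndication "true"

# Listener for the developer portal (portal.americasuniversity.net)
$portalListener = New-AzApplicationGatewayHttpListener -Name "apim-portal-listener" `
    -Protocol Https -FrontendIPConfiguration $fipConfig -FrontendPort $frontendPort `
    -SslCertificate $portalCert -HostName "portal.americasuniversity.net" -RequireServerNameIndication "true"
```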
Next, we need to configure the probes for the AG. Probes are critical here, as they define when and under what circumstances a given backend pool is considered "out of work". We have two listeners and two backend pools, so we need two probes as well.
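A sketch of the two probes (the /status-0123456789abcdef path is APIM's built-in gateway health endpoint; intervals and thresholds are hypothetical):

```powershell
# Probe for the APIM gateway backend
$apimProbe = New-AzApplicationGatewayProbeConfig -Name "apim-gw-probe" -Protocol Https `
    -HostName "api.americasuniversity.net" -Path "/status-0123456789abcdef" `
    -Interval 30 -Timeout 120 -UnhealthyThreshold 8

# Probe for the developer portal backend
$portalProbe = New-AzApplicationGatewayProbeConfig -Name "apim-portal-probe" -Protocol Https `
    -HostName "portal.americasuniversity.net" -Path "/signin" `
    -Interval 60 -Timeout 300 -UnhealthyThreshold 8
```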
Our requests will need to hit SSL-secured backend pools, right? I mean, APIM is already configured to accept encrypted requests only. It turns out that AG will be the element sending the request on external users'/apps' behalf, so we've got to make sure our backends are whitelisted for that. This is the configuration I'm putting together here.
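With the v1/WAF gateway SKUs, whitelisting an SSL backend means uploading its public certificate (the CER files generated earlier) as an authentication certificate; a sketch, with hypothetical paths:

```powershell
# Public certs of the backends, so AG trusts the SSL endpoints it forwards to
$apimAuthCert = New-AzApplicationGatewayAuthenticationCertificate -Name "apim-auth-cert" `
    -CertificateFile "C:\certs\api.americasuniversity.net.cer"
$portalAuthCert = New-AzApplicationGatewayAuthenticationCertificate -Name "portal-auth-cert" `
    -CertificateFile "C:\certs\portal.americasuniversity.net.cer"
```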
Cool! Time for us to create the HTTP settings for each backend pool we are going to deploy later on. This is what I'm doing here.
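A sketch of both HTTP settings, wiring in the probes and authentication certificates from the steps above (names and timeout are hypothetical):

```powershell
# HTTPS setting for the API backend pool
$apimPoolSetting = New-AzApplicationGatewayBackendHttpSetting -Name "apim-pool-setting" `
    -Port 443 -Protocol Https -CookieBasedAffinity Disabled `
    -Probe $apimProbe -AuthenticationCertificates $apimAuthCert -RequestTimeout 180

# HTTPS setting for the portal traffic
$portalPoolSetting = New-AzApplicationGatewayBackendHttpSetting -Name "apim-portal-setting" `
    -Port 443 -Protocol Https -CookieBasedAffinity Disabled `
    -Probe $portalProbe -AuthenticationCertificates $portalAuthCert -RequestTimeout 180
```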
Time for us to create the backend pools that will effectively process the requests coming into the AG. We have to have two, right? One for the API calls (we are going to call it apimbackend), and the other one specifically to handle attempts to access any API out of the scope of /external/*. Those can't succeed; we have to actively discard such requests, and the way we do that is by redirecting them to a non-targeted backend pool. That's what the next piece of code does: it creates a backend pool with no target at all, called sinkpool. These two backend pools were created by running the portion of code below.
Backend pools in place. Now, we need to make sure that the backend pool named apimbackend hits APIM's internal IP (in our case, 10.1.6.5). I do this by running the piece of code below.
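Both pools can be sketched in a couple of lines (the IP is APIM's internal address mentioned above):

```powershell
# Pool targeting APIM's internal IP
$apimBackendPool = New-AzApplicationGatewayBackendAddressPool -Name "apimbackend" -BackendIPAddresses "10.1.6.5"

# Deliberately empty pool: requests routed here go nowhere
$sinkPool = New-AzApplicationGatewayBackendAddressPool -Name "sinkpool"
```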
Next, we create a rule that directs external users to APIM's developer portal. To be valid, a rule must have the listener (which listens for the requests referring to the portal itself), a valid backend pool (in our case, apimbackend), and also an HTTP setting, which we set up a couple of steps above.
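A sketch of that rule, assuming the listener, pool, and HTTP setting objects from the previous steps:

```powershell
# Basic routing rule sending portal traffic straight to APIM
$portalRule = New-AzApplicationGatewayRequestRoutingRule -Name "apim-portal-rule" -RuleType Basic `
    -HttpListener $portalListener -BackendAddressPool $apimBackendPool -BackendHttpSettings $portalPoolSetting
```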
We're almost there. Now, we configure the AG's additional aspects, like name, tier, and capacity. Notice that I'm telling ARM that this AG must have WAF enabled in "Prevention" mode, as required. Also, this is where I bring along the whole set of AG objects I created early on.
After it got deployed, I was able to see the AG service up and running, properly reflecting the whole set of configurations we've put together so far. Figure 7 displays it in the Azure portal.
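Assembling everything, the deployment call looks roughly like this (a sketch; gateway name, SKU size, and capacity are hypothetical, and all the object variables come from the steps above):

```powershell
# v1 WAF SKU and firewall configuration (Prevention mode, per the requirement)
$sku = New-AzApplicationGatewaySku -Name "WAF_Medium" -Tier "WAF" -Capacity 2
$waf = New-AzApplicationGatewayWebApplicationFirewallConfiguration -Enabled $true -FirewallMode "Prevention"

# Deploy the gateway with every object built so far
$appGw = New-AzApplicationGateway -Name "appgtw-americasuniversity" `
    -ResourceGroupName "AUManager_Shared_Resources" -Location "East US 2" `
    -BackendAddressPools $apimBackendPool, $sinkPool `
    -BackendHttpSettingsCollection $apimPoolSetting, $portalPoolSetting `
    -FrontendIpConfigurations $fipConfig -GatewayIpConfigurations $gipConfig `
    -FrontendPorts $frontendPort -HttpListeners $gwListener, $portalListener `
    -RequestRoutingRules $portalRule -Sku $sku -WebApplicationFirewallConfig $waf `
    -SslCertificates $apiCert, $portalCert `
    -AuthenticationCertificates $apimAuthCert, $portalAuthCert `
    -Probes $apimProbe, $portalProbe
```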
There is still a missing part: AG's path rules for requests. We didn't go after those before because the AG cmdlets don't allow us to do the URL path mapping piece at deployment time, so we had to get the AG deployed first and only then add those rules in.
But what exactly is missing? Well, we have to:
- Retrieve the configurations of the gateway we've just deployed so we can apply the new ones on top.
- Add a new path rule that allows AG to filter incoming requests by /external/*, redirecting those to the proper backend pool (in our case, the existing apimbackend) and every other request to the sinkpool.
To make it happen, we need to retrieve the following information from the existing AG: the AG itself; both backend pools (as I will create path rules directing to both of them); the listener which waits for requests to the APIs; and the HTTP setting for the apimbackend pool.
The code snippet below shows what I used to retrieve the AG data needed.
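A sketch of that retrieval, assuming the hypothetical names used throughout this walkthrough:

```powershell
# Pull the deployed gateway and its sub-resources back into variables
$appGw = Get-AzApplicationGateway -Name "appgtw-americasuniversity" -ResourceGroupName "AUManager_Shared_Resources"
$apimBackendPool = Get-AzApplicationGatewayBackendAddressPool -ApplicationGateway $appGw -Name "apimbackend"
$sinkPool        = Get-AzApplicationGatewayBackendAddressPool -ApplicationGateway $appGw -Name "sinkpool"
$gwListener      = Get-AzApplicationGatewayHttpListener -ApplicationGateway $appGw -Name "apim-gw-listener"
$apimPoolSetting = Get-AzApplicationGatewayBackendHttpSetting -ApplicationGateway $appGw -Name "apim-pool-setting"
```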
Next, I created a configuration object that defines a default path route for requests formatted under /external/*. The variable $pathRule is the one holding it. As you can see, it puts together the filter criteria (in our case, "/external/*"), the backend pool to hit, and also the HTTP setting for it.
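A sketch of that object:

```powershell
# Path rule: anything under /external/* goes to the apimbackend pool
$pathRule = New-AzApplicationGatewayPathRuleConfig -Name "external" -Paths "/external/*" `
    -BackendAddressPool $apimBackendPool -BackendHttpSettings $apimPoolSetting
```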
Once I have the path rule I need in place, I've got to go ahead and create the mapping that applies it to the AG itself, since the rule alone doesn't do anything.
It's important to notice that, in the code below, I "tell" AG that my default mapping should direct ALL incoming requests to the
sinkpool backend pool, except the ones matched by the path rule I just configured. That's how we guarantee the proper mapping of requests to APIs flagged as external.
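A sketch of the URL path map (the map's name is hypothetical): sinkpool is the default destination, and /external/* is the only exception.

```powershell
# URL path map: default everything to sinkpool, except the /external/* path rule
$appGw = Add-AzApplicationGatewayUrlPathMapConfig -ApplicationGateway $appGw -Name "external-urlpathmapconfig" `
    -PathRules $pathRule -DefaultBackendAddressPool $sinkPool -DefaultBackendHttpSettings $apimPoolSetting
```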
Finally, I update the AG with this new configuration. Please be advised that this update to the service can take a while. If you are following along with this implementation, now would be a good moment to grab a coffee while waiting for the update's completion.
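The update itself is a single call; it re-submits the whole gateway object, hence the wait:

```powershell
# Push the modified gateway object back to Azure
$appGw = Set-AzApplicationGateway -ApplicationGateway $appGw
```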
We're almost there! What we just did covers only the rule piece. I mean, the path rule configuration is all about telling AG how to proceed when a request matching that rule appears. That's not enough, though. Now we have to glue this first part to the second part of the configuration: the URL routing piece.
First, I create a new configuration object that holds the URL path map. This configuration will allow AG to tie the URL routing to the existing rule.
Then, with that in place, I can go ahead and create the routing piece.
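A sketch of the routing piece, assuming the path map from the previous step was named external-urlpathmapconfig:

```powershell
# Fetch the path map just created and bind it to the API listener
$pathMap = Get-AzApplicationGatewayUrlPathMapConfig -ApplicationGateway $appGw -Name "external-urlpathmapconfig"
$appGw = Add-AzApplicationGatewayRequestRoutingRule -ApplicationGateway $appGw -Name "apim-external-rule" `
    -RuleType PathBasedRouting -HttpListener $gwListener -UrlPathMap $pathMap
```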
Finally, I perform a final update on my AG service to get everything settled.
After some time (the service's update time varies depending on the size of the gateway), the Azure portal reported that my AG was updated, and I could verify that my configuration was there. It was, as evidenced by Figure 8.
If you would like to see the full PowerShell script for this solution, it is available on GitHub through this link.
The final piece of the configuration consisted of adding two CNAME records to the domain’s administrator system. Coincidentally, Americas University’s IT teams moved their DNS management systems into Azure a couple of months ago so it was really easy to get it set up.
As you can imagine, we need these records because we want every single request hitting both AU's APIs and APIM's developer portal to land on AG's public endpoint.
So, the mapping we’ve got configured is listed below. Please, keep in mind that the URL after the -> indicates AG’s DNS name.
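Since AU's zone lives in Azure DNS, the two records can be created like this (a sketch; the zone's resource group and AG's DNS label are placeholders):

```powershell
# AG's public DNS name (placeholder) - the target of both CNAMEs
$agDns = "<ag-dns-name>.cloudapp.net"

# api.americasuniversity.net -> AG
New-AzDnsRecordSet -Name "api" -RecordType CNAME -ZoneName "americasuniversity.net" `
    -ResourceGroupName "AU_DNS" -Ttl 3600 -DnsRecords (New-AzDnsRecordConfig -Cname $agDns)

# portal.americasuniversity.net -> AG
New-AzDnsRecordSet -Name "portal" -RecordType CNAME -ZoneName "americasuniversity.net" `
    -ResourceGroupName "AU_DNS" -Ttl 3600 -DnsRecords (New-AzDnsRecordConfig -Cname $agDns)
```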
Figure 9 below shows the DNS entries sitting in AU's Azure DNS manager.
Testing the solution
We finally reached the point where we test the environment we built (and we built a lot).
Different aspects need to be tested to guarantee the proposed solution is working properly. First, we have to prove that calls to APIM's portal respond properly on the configured URL. In the same way, the APIs need to respond with the expected results on the configured endpoints, respecting the boundaries of being called both from inside the VNet and from outside of it. We'll dig into it.
The tests I’ll perform from now on can be seen below.
Test 1 – Access APIM's developer portal externally
From my PC (that is, from outside APIM's VNet), via browser, I'm going to call portal.americasuniversity.net.
Success criteria: I must be able to see APIM's developer portal.
Test passed, as you can see in Figure 10.
Test 2 – Access APIM's developer portal from the inside
From an internal resource (in our case, AU's technical team provided a Bastion server deployed in the same VNet as APIM for testing purposes), through a browser, I'll perform a call to portal.americasuniversity.net.
Success criteria: After configuring the machine's hosts file to direct the traffic to APIM's internal IP, I must be able to see APIM's developer portal.
Test passed, as you can see in Figure 12.
Test 3 – Calling APIs externally
From my PC (that is, from outside APIM's VNet), via Postman, I'm going to call two different APIs, testapi1 (flagged as external in APIM) and testapi2 (flagged as internal in APIM), respectively. For this test, I assumed AU's technical teams had already flagged the referred APIs properly within APIM's developer portal.
Success criteria 1: HTTP code 200 must be returned for the call to testapi1.
Test passed, as you can see in Figure 13.
Success criteria 2: HTTP error code 500 must be returned for the call to testapi2.
Test passed, as you can see in Figure 14.
Test 4 – Calling APIs from the inside of APIM’s VNet
From an internal resource (again, the Bastion server AU's technical team provided in the same VNet as APIM), through a browser and the APIM developer portal, I'll perform the same calls to the existing APIs.
Success criteria 1: HTTP code 200 must be returned for the call to testapi1.
Test passed, as you can see in Figure 15.
Success criteria 2: HTTP code 200 must be returned for the call to testapi2.
Test passed, as you can see in Figure 16.
What a journey so far, huh? The good news is: we were able to bring Americas University a very robust and scalable solution to protect their APIs.
As you can see, there are several pieces to this puzzle; however, the way these pieces come together is very smooth, which makes the solution quite doable, for sure.
Some important aspects worth mentioning before we close:
- There are several API management options in the market with very appealing features. APIM is definitely a contender, especially if your APIs are either sitting on or coming into Azure. You can easily integrate the service with others (like Application Gateway, for instance), saving a considerable amount of time while taking advantage of a service entirely managed by Microsoft.
- Application Gateway once again showed itself to be a very flexible solution, allowing AU's technical teams to solve different problems (WAF, a reverse proxy for the APIs, and more) with a single deployment (cost savings).
- It's important to mention that you're not tied to Azure infrastructure to take advantage of services like Application Gateway and APIM. Both of them can be used to connect to cloud services other than Azure.
That’s it. Hope it helps!