
Migrating an application from Azure Web App to Container Apps

Aino Vuorio and Olli Uronen co-wrote this blog.
The authors of this blog, Olli and Aino, both switched careers in the past year and are now working at Knowit as Azure Cloud Consultants. Having started their IT careers recently, they have been able to participate in a variety of projects with multiple customers, covering aspects of cloud from security to DevOps to serverless solutions. One of these projects is introduced in this blog post.

Knowit supports innovation and learning in multiple ways. You can spend some of your time learning new topics, doing certifications, or even get support for studies and projects outside of work. One of these ways is participating in Innovation Zone projects. Innovation Zone is a collaboration hub that provides employees with the opportunity to discover solutions and try new things and technologies.

Moving an application from one Azure service to another

The project revolved around a robot manager application that was running in a multi-container web app in Azure. The need for the project arose from requirements that could not be met in the current environment. These requirements included network integration, which the Azure web app in its multi-container mode does not support. In addition, the continuity of the multi-container web app service is uncertain. Our aim was to try and see whether it would be possible to move the environment to the recently launched Azure Container Apps. There was also a desire to make the application safer by removing public access to its resources.

The project ended up being far from straightforward, and it is safe to say we learned a lot. To help others who may be struggling with the same issues we came across, we are sharing our experiences and discoveries from the project in this blog. The application we were migrating was not an ideal fit for Container Apps, because its architecture is neither microservice-based nor serverless. However, the scope and aim of this project was to try and see whether the application would run in Container Apps without modifying the application itself. All in all, Azure Container Apps does seem like a promising service for anyone wanting to work with containers with some flexibility, without the complexity that comes with Azure Kubernetes Service.

Project setup

Having started working at Knowit less than six months ago, we were excited to participate in our first Innovation Zone project. Throughout the project we were provided with support in the form of daily meetings, collaborative problem-solving sessions, and senior knowledge. The contribution of the more senior members of the project group was crucial, especially when we ran into dead ends or could not solve an issue by ourselves.

What are Azure Container Apps and the Container Apps environment?

Azure Container Apps is the latest serverless platform from Microsoft Azure for running containerized applications and microservices. It requires a Container Apps environment, which acts as the isolation boundary for all the container apps deployed to it. A single Container Apps environment has one or more container apps deployed to it.

Inside the environment, all container apps share the same virtual network, which is either created automatically or an existing, manually chosen subnet. An automatically created virtual network cannot be used to deploy service endpoints for resources like PostgreSQL Server and Storage Account, which turned out to be an issue for the project we were working on. Currently, changing the virtual network configuration of a Container Apps environment is not possible. Therefore, creating the Container Apps environment with an existing subnet from the get-go provides more flexibility when proceeding with future configurations.
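
For reference, when the environment is created against an existing subnet, the network-related part of its configuration looks roughly like this (a minimal sketch using the ARM property names, with placeholder resource IDs):

properties:
  vnetConfiguration:
    # ID of the existing subnet the environment is deployed into; cannot be changed afterwards
    infrastructureSubnetId: /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>
    # true makes the environment reachable only from inside the virtual network
    internal: true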

Container Apps

A container app hosts one or more revisions. You can have multiple revisions of your container(s) and split traffic between them automatically, or give your test users a revision-specific URL to access the test revision. Inside an environment you may choose to expose container apps publicly, internally (only to other container apps inside the environment), or not at all. Ingress management lets you adjust which single port each container app listens on.
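
As an illustration, this is roughly what the ingress section of a container app's YAML can look like when traffic is split between two revisions (the app and revision names below are made up):

properties:
  configuration:
    ingress:
      external: true        # false exposes the app only inside the environment
      targetPort: 8080      # the single port this container app listens on
      traffic:
        - revisionName: robot-manager--v1
          weight: 80
        - revisionName: robot-manager--v2
          weight: 20        # test users can also hit the revision-specific URL directly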

If multiple containers are run in the same container app revision, they run on the same “localhost” and can access each other with simpler URLs. The same happens when docker-compose orchestrated applications are run in a Web App or locally with docker-compose, and this was also the case in the environment the application was currently running in. In the beginning of the project, we played with the idea of just executing a kind of “lift-and-shift” for the application and running the containers in the same revision, but soon enough it became apparent that this was not satisfactory.
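
A lift-and-shift along those lines would have meant listing all the containers in one container app template, roughly like this (the image names are placeholders), so that they share the same localhost:

properties:
  template:
    containers:
      - name: api
        image: <registry>/robot-manager-api:latest
      - name: ui
        image: <registry>/robot-manager-ui:latest   # can reach the api container via localhost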

It became clear that we wanted to leverage the power of Container Apps to the max, meaning that we would run each container in its own separate container app, because that provides the best scalability and isolation of services. The FQDN (fully qualified domain name) follows this simple table from the Microsoft docs:

Ingress visibility setting | FQDN
External                   | <APP_NAME>.<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io
Internal                   | <APP_NAME>.internal.<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io

Changing the application URLs was not hard, but it is something to consider when moving your application from Web App to Container Apps.

Start-up commands are not as simple as we thought

Before we got any containers running, we ran into provisioning problems. We quickly identified that the issue had to do with the container start-up commands. After a while, one of the more experienced developers found a solution that required adding the ‘entrypoint’ command to the start-up command. So instead of providing the normal commands needed for container images like this:

- command:
    - minio
    - gateway
    - azure

We had to add the entrypoint command, which is normally included automatically in any Docker container run, like this:

- command:
    - /usr/bin/docker-entrypoint.sh
    - minio
    - gateway
    - azure

Database tier and virtual network integration

In the previous environment, the application was using the Basic tier of PostgreSQL. When mapping out the requirements for the improved security of the application, the Basic tier proved to be insufficient for our needs.

We needed another tier, and the documentation told us that the General Purpose and Memory Optimized pricing tiers support Private Link. The PostgreSQL tier could not be upgraded in place, which forced us to create a new server. A Basic tier backup inside Azure cannot be restored to a server of another tier, so we had to set up the server, users, and databases from the beginning.

Creating the Private Link was simple enough thanks to the easy portal experience that guides you through creating all the resources. After creating a Private DNS zone, a NIC, and a private endpoint in the same virtual network as the Container Apps, our PostgreSQL server was safely away from any public access and still usable by our containers the same way as before.

Volumes, Storage Account, File Shares and Blobs

Among the first things we learned was that volume mounts do not work the same way as they do locally or in Web Apps. The Azure CLI command to set up container apps does not set volume mounts, and explicitly warns about that.

Luckily, File Shares provide a way to use persistent and/or shared storage for containers. Those File Shares then need to be connected to the Container Apps environment and mounted to the container apps by first downloading their YAML files, modifying them, and then updating the container apps.
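
Once a File Share has been added to the environment as a storage definition, the container app's YAML gets a volume and a matching volume mount, roughly like this (the names and mount path below are made up):

properties:
  template:
    volumes:
      - name: shared-files
        storageType: AzureFile
        storageName: robot-manager-files   # the storage definition added to the environment
    containers:
      - name: app
        image: <registry>/robot-manager:latest
        volumeMounts:
          - volumeName: shared-files
            mountPath: /app/data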

One container uses MinIO to provide API access to blob storage. The blob containers were created in the same Storage Account as the File Shares. In addition, private endpoints, NICs and Private DNS Zones were needed here as well.

Envoy insights from a senior developer

Funnily enough, we also found an HTTP protocol violation in our application when trying to deploy it in Container Apps. Container Apps uses (at least it did at the time of writing this) https://www.envoyproxy.io/ as the edge HTTP proxy to implement ingress. TLS is terminated at the edge and the requests are then routed further down. One of our containers exposes APIs for other containers and the UI to use. At one point of our experimentation, we got stuck in a failing initialization procedure of the system as a whole, essentially rendering the whole system non-responsive and useless. Eventually we were able to trace this to a single endpoint accidentally returning HTTP "204 No Content" with 2 extra bytes after the headers. According to the spec, "A 204 response is terminated by the end of the header section; it cannot contain content or trailers."

Interestingly, the application had been running in production with this faulty endpoint for a few years, both on-prem and on Azure, without any hiccups. It wasn't until running the API behind Envoy that this was caught, as Envoy refused to process the response.

The cons of Container Apps

During our project we ran into several things that we hope will be improved in the future.

Container Apps is indeed a solution that promises the benefits of Kubernetes without its complexity. Container Apps runs on Kubernetes and is simplified and ostensibly easier to use. It hides some of the technical details to make configuration easier for the user. This seemed to work well, until it did not: debugging proved to be far more difficult than expected, since the very tools needed to fix the bugs were among those hidden from the user.

The easy integration with Azure Key Vault was something we ended up missing from the old Web App environment. In its current state, Container Apps does not support Key Vault well. By default, it uses its own secrets section where you can list secrets, but they are not stored in Key Vault. Another way would be to access Key Vault programmatically, for example by using a Managed Identity, but this is something we did not explore.
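
For comparison, the built-in secrets mechanism looks roughly like this in a container app's YAML: the value lives in the app's own secrets section (not in Key Vault) and is referenced from an environment variable (the names below are made up):

properties:
  configuration:
    secrets:
      - name: db-password
        value: <secret value stored by Container Apps itself, not in Key Vault>
  template:
    containers:
      - name: app
        image: <registry>/robot-manager:latest
        env:
          - name: DB_PASSWORD
            secretRef: db-password   # resolves to the secret defined above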

Lastly, we ran into some issues with the container probes that caused the containers to crash. In our case, the crashing was fixed with some other configuration changes, but we did spend a fair amount of time exploring and modifying the probes. For these parts, the Microsoft documentation fell short, providing us with only a mention of the default probes without further information on what they include.
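
If you do need to adjust them, probes are defined per container in the app's YAML. A minimal sketch, assuming an application-specific HTTP health endpoint and port (both made up here):

properties:
  template:
    containers:
      - name: app
        image: <registry>/robot-manager:latest
        probes:
          - type: Liveness
            httpGet:
              path: /health          # assumed health endpoint of your application
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 30
          - type: Readiness
            tcpSocket:
              port: 8080
            initialDelaySeconds: 5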

What is coming next to Container Apps?

The Container Apps team is working on making it possible to change from one version to another without downtime when using single revision mode. Currently, you would need at least two revisions to pull this off.

Better integration for Key Vault referencing is being looked at too; however, the time frame is open for now.

Another improvement being worked on is restricting inbound HTTP traffic based on IP. Currently, you would have to do this by closing the environment into a virtual network and controlling access to that network.

You can check out this link to follow progress on these topics.

Project conclusion

The project was a learning experience for us. We familiarized ourselves with the Container Apps service, its perks and its restrictions, got to understand the logic behind the application we were working with, and learned agile methods for working in a team. Container Apps and its environment seemed to be a fit for the application, and we gained plenty of experience on how to implement it. Knowit Innovation Zone and its projects are something that we are happy to take part in again in the future.