Deploying Kubernetes on Public Clouds is hard – or is it?
Stephan Fabel
on 13 August 2018
Tags: Amazon Web Services, AWS, Azure, conjure-up, Deployment, GCP, JAAS, Kubernetes, MicroK8s
Automate your Kubernetes deployments on AWS, Azure, and Google
Recently, there’s been talk about how Kubernetes has become hard to deploy and run on virtual substrates such as those offered by the public clouds. Indeed, the cloud-specific quirks around infrastructure provisioning, including storage, networking assets such as load balancers, and overall access control (IAM), differ from one cloud provider to another. It is safe to assume they also differ between your on-prem IaaS or virtualized infrastructure and the public cloud APIs.
With all the public Container-as-a-Service (CaaS) offerings available to you, why would you deploy Kubernetes to a generic IaaS substrate anyway? There are many reasons for doing so.
You may…
- …require a specific version of Kubernetes that is not available through one of the CaaS services
- …have to replicate your on-premises reference architecture exactly
- …need full control over the Kubernetes master server
- …want to test new configurations of different Kubernetes versions
Whatever your reasons, to make this experience easier, our awesome engineers at Canonical have been working hard on an abstraction layer for the most common API calls across the most popular public clouds, such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure. Support for other clouds is on the roadmap and will be delivered later this year.
Currently, the following API integrations are supported:
Service | Amazon Web Services (AWS) | Google Cloud Platform (GCP) | Microsoft Azure |
---|---|---|---|
Load balancing | Elastic Load Balancing (ELB) | GCE Load Balancer | Azure Load Balancer |
Block storage | Elastic Block Store (EBS) | GCE Persistent Disk | Azure Disk Storage |
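As a concrete illustration of the block-storage row, an ordinary PersistentVolumeClaim gets fulfilled by the cloud’s native disk service once the relevant integrator charm (described below) is related. A minimal sketch, assuming the integration provides a default StorageClass; the claim name is just an example:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      # request a 1Gi volume from the default StorageClass
      storage: 1Gi
EOF

On AWS the claim is backed by an EBS volume, on GCP by a Persistent Disk, and on Azure by a managed disk, with no cloud-specific settings in the manifest.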
This abstraction layer manifests itself as an overlay bundle on top of the existing CDK bundle, and connects Kubernetes-native concepts, such as a load balancer, to the corresponding public-cloud-specific ones, such as an AWS ELB. To deploy the Canonical Distribution of Kubernetes (CDK) on a public cloud, all you need to do is add the integrator charm to the existing bundle. Now, when running a command such as
$ kubectl expose service web --port=80 --type=LoadBalancer
the public cloud’s native API will be used to create that load balancer.
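From the Juju command line, adding the integrator looks roughly like the following. This is a sketch rather than a definitive recipe: the AWS case is shown, juju trust requires Juju 2.4 or later, and relation endpoints may need to be spelled out explicitly depending on the charm revisions in use.

$ juju deploy canonical-kubernetes
$ juju deploy aws-integrator
$ juju trust aws-integrator    # grant the charm access to your cloud credentials
$ juju add-relation aws-integrator kubernetes-master
$ juju add-relation aws-integrator kubernetes-worker

The gcp-integrator and azure-integrator charms follow the same pattern on the other clouds.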
Using the Juju-as-a-Service (JAAS) SaaS Platform
JAAS provides immediate access to a Juju GUI canvas and allows for quick and simple composition of Juju models based on ready-to-run bundles and charms. CDK is available as a Juju bundle and can be added to the JAAS canvas by clicking the “+” button and selecting the production-grade Kubernetes option. Add your credentials to the aws-integrator charm configuration so it knows how to interact with AWS.
For example, in order to provision this integration on top of Amazon Web Services (AWS), simply add the CDK bundle to your JAAS canvas, click the “+” and search for the aws-integrator charm, then add it to your model.
Add relations to both the kubernetes-master and the kubernetes-worker applications, and click “deploy”. You will be asked to enter your AWS credentials, optionally import your SSH keys into the deployed machines, and JAAS will take care of the rest for you.
Using the command line
If the command line is more appealing to you or if the deployment of a production-grade Kubernetes cluster is part of your CI/CD pipeline, you can use conjure-up either in guided or in headless mode.
To deploy the Canonical Distribution of Kubernetes (CDK) with conjure-up, enter the command below at your shell prompt and follow the steps outlined by the install wizard. For more in-depth conjure-up usage instructions, check out our tutorial as well as our online documentation.
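A minimal sketch of the guided flow, assuming conjure-up is installed from the Snap Store and using the same spell name as the headless example below:

$ sudo snap install conjure-up --classic
$ conjure-up canonical-kubernetes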
Integrating conjure-up with your CI/CD pipeline
Another way to use conjure-up is headless mode. You can trigger this by supplying the destination cloud and region on the command line, like so:
$ conjure-up canonical-kubernetes google/us-east1
There are more options available, for example offloading the Juju controller instantiation to JAAS, or specifying an existing model you’ve deployed in a different manner. Review the conjure-up documentation to create many other repeatable deployment scenarios.
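In a CI/CD job, the headless invocation can be wrapped in a small script. The sketch below assumes the build agent already has the conjure-up snap installed and cloud credentials configured, and that the spell leaves a usable kubeconfig in the default location; adjust for your environment:

#!/bin/bash
set -euo pipefail

# Headless deployment, same invocation as above
conjure-up canonical-kubernetes google/us-east1

# Smoke test: the new cluster should report its nodes
kubectl get nodes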
Summary
Deploying the Canonical Distribution of Kubernetes to a public cloud is easy. You can deploy using conjure-up in either guided or headless mode, or use the Canonical Juju-as-a-Service (JAAS) web interface. Deploying CDK to the public cloud typically takes less than 20 minutes and is easily integrated into your CI pipeline.
CDK is a complete, highly available, and resilient reference architecture for production Kubernetes deployments, offered by Canonical with business-hours, 24×7, and fully managed support levels. Contact [email protected] for more information.
Have you tried microk8s?
If you develop software designed to run on Kubernetes, the microk8s snap provides the easiest way to get a fully conformant local Kubernetes up and running in under 60 seconds on your Linux, Windows, or Mac workstation or virtual machine, for testing and software development purposes.
Try it today:
$ sudo snap install microk8s --classic
$ microk8s enable dns dashboard
or find out more at https://microk8s.io
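Once the snap is installed, a quick sanity check (assuming a recent MicroK8s release where these subcommands are available):

$ microk8s status --wait-ready
$ microk8s kubectl get nodes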