K8s Meets PCF: Pivotal Container Service from Different Perspectives
Originally posted on altoros.com/blog
Learn how operators, cloud admins, developers, DevOps engineers, and managers get involved in using PKS.
PKS version 1.3
A year ago, Pivotal released Pivotal Container Service (PKS)—a tool enabling operators to deploy, run, and provision enterprise-grade Kubernetes clusters. Then, we provided an overview of PKS v1.0, investigating hands-on experience with the service.
Recently, Pivotal released Pivotal Container Service v1.3, adding the latest stable version of Kubernetes, support for Microsoft Azure and NSX-T, and some other features. Versions 1.0 and 1.1 are no longer supported and have been removed from the download options. Version 1.2.6 is still fully supported and can be used.
PKS holds different value and interest for people in different project roles, and those people ask a variety of service-related questions. Therefore, we’ve decided to approach PKS from different perspectives in an attempt to answer the questions that may concern project managers, cloud admins, platform operators, DevOps engineers, and business logic developers.
From a project manager’s perspective, PKS is Pivotal’s way to deploy and use Kubernetes clusters. The service enables rapid and simple provisioning of Kubernetes clusters, which can serve as a platform for containerized workloads, both stateful and stateless. PKS can be employed anywhere Kubernetes is useful: as a first step toward microservices, as a way to run containerized applications without additional effort, as a home for stateful services, etc. Pivotal Container Service is the company’s production-grade offering with three key “S”s for an enterprise: security, simplicity, and support.
Configuration and administration
From a cloud administrator perspective, like any other system or project, PKS is a set of network and compute resources. It is available for all major public clouds: Google Cloud Platform, AWS (supported from v1.2), Microsoft Azure (supported from v1.3), and VMware vSphere (either with NSX-T or without it). Network setup and routing schema are flexible and vary depending on deployment needs.
To create a PKS instance on Google Cloud Platform, AWS, or Microsoft Azure, a cloud admin manually sets up and configures networking and load balancing. A typical PKS installation needs three networks (pks_subnet, public_subnet, and services_subnet) with a multi-AZ setup to enable high availability.
There are three types of load balancers in a PKS deployment:
- PKS API Load Balancer is needed to connect to the PKS API, which is responsible for deploying Kubernetes clusters.
- Cluster Load Balancer establishes traffic routing to a Kubernetes cluster. In the case of VMware vSphere, load balancing is configured automatically through the integrated NSX-T solution, which manages traffic routing via Network Address Translation.
- Workload Load Balancer helps to access containers running on a Kubernetes cluster. In the case of GCP, AWS, or VMware vSphere integrated with NSX-T, a Kubernetes master can create these load balancers automatically.
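On clouds where the master can provision workload load balancers, they are requested declaratively: creating a Kubernetes Service of type LoadBalancer prompts the cloud provider integration to provision one. A minimal sketch (the web-app name and ports are illustrative):

```yaml
# Service of type LoadBalancer; the cloud provider integration
# provisions an external load balancer for it automatically.
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  type: LoadBalancer
  selector:
    app: web-app     # routes traffic to pods with this label
  ports:
  - port: 80         # port exposed by the load balancer
    targetPort: 8080 # port the container listens on
```

Once applied with kubectl, the Service’s external IP appears in the output of `kubectl get service web-app`.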
By default, PKS offers three templates (plans) for service configuration. However, these templates come empty rather than predefined, so a platform operator has to manually assign resources for Kubernetes clusters. Usually, operators define such plans based on the average amount of resources needed for “small,” “medium,” or “large” deployments.
With manual resource assignment, an operator relies solely on experience and intuition, so clusters may end up under- or over-provisioned. The resources will be created and managed by BOSH, with no additional action needed from a cloud administrator, but quotas, budget, and the amount of compute resources should still be considered.
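Once plans are defined, clusters are provisioned from them through the PKS CLI. A sketch, assuming a plan named small has been configured by the operator and that the API endpoint, credentials, and hostnames are illustrative:

```shell
# Log in to the PKS API (endpoint and credentials are illustrative)
pks login -a api.pks.example.com -u operator -p 'password' --skip-ssl-validation

# Create a cluster from the "small" plan; --num-nodes overrides
# the plan's default worker count
pks create-cluster my-cluster \
  --external-hostname my-cluster.example.com \
  --plan small \
  --num-nodes 3

# Check provisioning status
pks cluster my-cluster
```

The external hostname should resolve to the cluster load balancer described above.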
Pivotal provides some automation for managing PKS-related cloud resources. For AWS, Google Cloud Platform, and Microsoft Azure, Terraform templates are available through the Pivotal Network to help with networking configuration and BOSH provisioning.
Below, we summarize the perks that a PKS installation provides for a Kubernetes cluster.
Reliability. PKS relies on BOSH and Ops Manager for deploying Kubernetes clusters. BOSH provides self-healing, canary deployments, and some HA features by default.
Cloud independence. PKS provides a uniform experience across clouds, with the possibility to migrate workloads and reuse operational procedures in a similar manner.
Automation. PKS is deployed as a tile to Ops Manager, which features an API for automating the deployment and upgrade cycle. Pivotal provides Terraform templates to help cloud administrators automate resource creation.
Simplicity. Platform operators use such well-known tools as Ops Manager to deploy, configure, and update the PKS tile. Ops Manager provides an API and CLI tooling for automation, with a single command needed to configure the kubectl context to access a cluster. Under the hood, PKS is a typical Kubernetes distribution, so the kubectl CLI can be used freely.
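The single command mentioned above is pks get-credentials, which fetches cluster credentials and sets the kubectl context in one step (the cluster name is illustrative):

```shell
# Fetch credentials for the cluster and point kubectl at it
pks get-credentials my-cluster

# kubectl now talks to the PKS-provisioned cluster
kubectl cluster-info
kubectl get nodes
```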
Flexibility. Kubernetes clusters provided by PKS can be reconfigured. The number of master and worker nodes can be selected, and custom workloads are also supported now. BOSH-deployed clusters enable OS-level access to Kubernetes VMs in case deep debugging is needed.
Support. Pivotal provides sufficient support for its services.
Security. PKS tracks the current version of Kubernetes: once Kubernetes is upgraded, PKS synchronizes with the changes, too. BOSH stemcells (used as a deployment base) ship with security patches in place and are frequently updated. PKS supports UAA, which can be integrated with an LDAP server to grant access to users with appropriate rights. NSX-T can help with granular ACL configuration for a vSphere deployment. Harbor integration enables vulnerability scanning and security review for container images.
Backup and recovery. Pivotal provides tools and processes to back up and restore a PKS installation. The BBR tool, which is widely used for Cloud Foundry backup and recovery, can be employed for PKS, as well.
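A hedged sketch of how BBR might be invoked against a PKS installation; the director address, credentials, key path, and deployment name below are all illustrative, so consult the Pivotal documentation for the exact procedure:

```shell
# Back up the BOSH director that manages the PKS deployment
bbr director --host 10.0.0.6 --username bbr \
  --private-key-path ./bbr_key backup

# Back up the PKS deployment itself (deployment name is illustrative;
# find the real one with "bosh deployments")
bbr deployment --target 10.0.0.6 --username admin --password 'secret' \
  --deployment pivotal-container-service-abc123 backup
```

BBR writes backup artifacts to the local working directory; the matching restore subcommands replay them against a fresh installation.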
Kubernetes Kubectl CLI
This cheat sheet introduces a user to configuring the CLI and features a set of essential commands to manage a cluster and collect data from it. The document also explains how to create, group, update, and delete cluster resources. In addition, you will learn how to debug Kubernetes pods, as well as manage ConfigMaps and secrets.
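A few of the essential commands in question, grouped by task (the pod and resource names are illustrative):

```shell
# Inspect cluster state
kubectl get pods --all-namespaces
kubectl describe pod my-pod

# Create, update, and delete resources declaratively
kubectl apply -f deployment.yaml
kubectl delete -f deployment.yaml

# Debug a running pod
kubectl logs my-pod
kubectl exec -it my-pod -- /bin/sh

# Manage ConfigMaps and secrets
kubectl create configmap app-config --from-literal=log_level=debug
kubectl create secret generic app-secret --from-literal=api_key=changeme
```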