Highlights:

  • The Oracle Container Engine for Kubernetes has a fully managed control plane and is fully conformant with Cloud Native Computing Foundation standards.
  • According to Oracle, compared with hosting Kubernetes on competing public clouds, clients can save up to 50% and benefit from additional services not included in Kubernetes clusters.

Recently, Oracle Corp. unveiled new feature updates to its cloud-based Oracle Container Engine for Kubernetes, claiming that it can streamline operations, lower costs, and increase reliability and efficiency in large-scale systems using the Kubernetes orchestrator for software containers.

The improvements are intended for businesses using agile DevOps methods and constructs like microservices to build and run cloud-native apps on Oracle Cloud Infrastructure. Vijay Kumar, Vice President of product marketing for application development services and developer relations at Oracle, said, “Kubernetes is notoriously complex not only to operate but to find the people with deep skill sets. We’re dramatically simplifying the deployment and operations of Kubernetes at scale.”

Up to 50% Lower Costs Than Competing Public Clouds

According to Kumar, Oracle supports Kubernetes in various runtime settings, from bare metal to serverless operations. The Oracle Container Engine for Kubernetes has a fully managed control plane and is fully conformant with Cloud Native Computing Foundation standards. According to Oracle, compared with hosting Kubernetes on competing public clouds, clients can save up to 50% and benefit from additional services not included in Kubernetes clusters. Kumar added that Oracle provides uniform pricing across all international regions to reduce complexity.

Leo Leung, Vice President of products and strategy at Oracle, said, “A big piece of Kubernetes is compute and on a computer-by-computer basis, we’re less than 50% of the list price of the lowest-cost region of other providers. Then there are additional parts of Kubernetes that require compute to boot up the cluster, and we’re lower cost there as well.”

The improvements include virtual nodes, which let businesses run Kubernetes-based apps reliably and at scale without the operational burden of managing, scaling, upgrading, and troubleshooting the underlying Kubernetes node infrastructure. With usage-based pricing, virtual nodes also offer pod-level elasticity.
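Pod-level elasticity of this kind is typically driven by standard Kubernetes autoscaling: on serverless virtual nodes, each replica the autoscaler adds is billed per pod rather than per provisioned node. A minimal sketch, assuming a hypothetical `web-app` deployment (the names and targets below are illustrative, not Oracle-specific):

```yaml
# Illustrative HorizontalPodAutoscaler: scales the pod count between
# 2 and 50 replicas based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app            # hypothetical deployment name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

On a node-managed cluster the operator must also ensure enough worker nodes exist to place those replicas; on virtual nodes that placement concern goes away, which is the operational burden the feature removes.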

Leung added, “Customers that are deep into Kubernetes may want to have control over worker nodes to get fine-grained control over the infrastructure, such as running all pods inside bare metal. For the majority of customers, though, we believe serverless is the right answer. They don’t want knobs and dials. They want a service that’s going to scale.”

Encompassing Lifecycle Management

With complete lifecycle management covering deployment, upgrades, configuration changes, and patching, the improvements give enterprises greater freedom in installing and configuring their preferred operational software and associated applications. Add-ons span optional software operators, including the Kubernetes dashboard and operators for Oracle Database and Oracle WebLogic, as well as essential software deployed on the cluster, such as CoreDNS and kube-proxy.

Controls for identity and access management are now available at the pod level. The default maximum number of worker nodes for newly provisioned clusters has been raised to 2,000. Support for inexpensive spot instances has been added, along with financially backed service-level agreements for both the worker nodes and the Kubernetes API server.
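Steering workloads onto spot-style capacity is usually expressed with standard Kubernetes taints and tolerations, so only fault-tolerant jobs land on nodes that can be reclaimed. A hedged sketch (the taint key below is hypothetical, not a documented OCI label):

```yaml
# Illustrative pod spec tolerating a taint applied to preemptible/spot
# worker nodes; pods without this toleration are kept off those nodes.
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker                    # hypothetical workload
spec:
  tolerations:
    - key: "example.com/preemptible"    # hypothetical taint key
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
  containers:
    - name: worker
      image: busybox
      command: ["sh", "-c", "echo processing; sleep 3600"]
```

The design point is that spot savings come with interruption risk, so the scheduler, not the application, decides which pods are allowed to absorb that risk.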

With the capacity to grow to thousands of nodes, Kumar said, “you can have a fairly large application running on a Kubernetes cluster without having all the networking between clusters.”