
What are Cloud-native Network Functions (CNFs)?


Cloud-native concepts and technologies have proven to be successful accelerators in the construction and continuing operation of the world's biggest clouds. Many Communication Service Providers (CSPs) and other telecommunications organizations have adopted this technology to produce Cloud-Native Network Functions (CNFs), the next generation of Virtual Network Functions (VNFs).

More than fifty percent of respondents to a recent Cloud Native Computing Foundation (CNCF) microsurvey said they would convert between 76 and 100 percent of their Physical Network Function (PNF) and VNF infrastructure to CNFs. The second-largest group of respondents, 23.81%, indicated that they would move between 50 and 75 percent of their infrastructure to CNFs.

These CNFs, when operating inside a telecommunications facility, establish a private cloud using the same concepts as the public cloud. CNFs cover all service provider market segments, including cable, mobile, video, security, and network infrastructure.

Service providers want to minimize OpEx by automating and simplifying their network operations, shorten time to market for their services, and deploy across a wide array of cloud environments. Cloud-native technologies offer the core building blocks for developing these applications.

In this article, we will outline the following topics:

  • What do Cloud-native Network Functions (CNFs) Mean?
  • What is the Cloud-Native Approach?
  • Why Do You Need Cloud-native Network Functions (CNFs)?
  • What are the Advantages of Cloud-native Network Functions (CNFs)?
  • What are the Challenges of Cloud-native Network Functions (CNFs)?
  • How does a CNF Work?
  • What is the Deployment Environment of a CNF?
  • What are the Use Cases for Cloud-native Network Functions?
  • What are the Differences Between CNFs and VNFs?

What do Cloud-native Network Functions (CNFs) Mean?

A Cloud-Native Network Function (CNF) is a software implementation of a network function that traditionally ran on dedicated hardware, such as a router, firewall, network switch, or VPN gateway. The development of CNF technology was enabled by today's cutting-edge server systems. Formerly, this sort of processing capability could only be achieved with the application-specific integrated circuits used in physical network appliances. The abundant and affordable CPU and memory resources of modern server systems now make it possible for software to handle these network duties. Because CNFs are entirely software-based, they use virtual interfaces rather than physical ones. CNFs operate inside Linux containers orchestrated by Kubernetes. CNFs in use today include routers, firewalls, virtual switches, and virtual private network gateways.

In the ETSI NFV standards, Cloud-Native Network Functions are a subset of Virtualized Network Functions (VNFs) and are orchestrated as VNFs, i.e., using the ETSI NFV MANO architecture and technology-neutral descriptors (e.g., TOSCA, YANG). What distinguishes CNFs from classic VNFs, a component of Network Function Virtualization (NFV), is the orchestration approach: the higher levels of the ETSI NFV MANO architecture (the NFV Orchestrator (NFVO) and the VNF Manager (VNFM)) collaborate with a Container Infrastructure Service Management (CISM) function that is typically built with cloud-native orchestration technologies such as Kubernetes.

The following are the properties of Cloud-Native Functions:

  • Small performance footprint, with horizontal scalability.

  • CNFs function independently of the guest operating system, since they operate as containers.

  • Standardized RESTful APIs allow containerized microservices to interact with one another.

  • Their lifecycle is managed by Kubernetes, using OCI-compliant container images (such as Docker images) and a container runtime.
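
The lifecycle management in the last point depends on the function reporting its own health. As a rough illustration (not taken from any particular CNF), a containerized function might expose an HTTP health endpoint for a Kubernetes liveness probe to poll; the `/healthz` path and the server setup below are illustrative assumptions:

```python
# Sketch of a CNF-style health endpoint, assuming Kubernetes probes it
# over HTTP. The /healthz path is a common convention, not a requirement.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            self.send_response(200)   # liveness probe sees "healthy"
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # suppress per-request logging

def serve(port=0):
    """Start the endpoint on a background thread; port 0 = ephemeral."""
    server = HTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In a real deployment the probe target would be declared in the pod spec, and Kubernetes would restart the container when the endpoint stops answering.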

The Cloud Native Computing Foundation (CNCF) aims to drive CNF adoption by promoting and nurturing an ecosystem of open-source, vendor-neutral projects, democratizing contemporary design patterns to make these innovations accessible to everyone.


What is the Cloud-Native Approach?

Being cloud native is a strategy for developing and operating applications that takes full advantage of cloud architecture. A cloud-native application employs a set of technologies that manage and simplify the orchestration of the application's constituent services. These services, each with its own lifecycle, are deployed as containers and linked through APIs. A container scheduler governs where and when a container should be provisioned into an application and is responsible for container lifecycle management. Cloud-native applications are designed to be deployable in a variety of environments, including public, private, and hybrid clouds. Continuous delivery and DevOps are methodologies used to automate the creation, testing, and deployment of services into a production network.

Cloud-native solutions enable enterprises to create and deploy scalable applications in contemporary, dynamic settings, such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach. These techniques enable loosely coupled systems that are resilient, observable, and manageable. Combined with robust automation, they allow engineers to make frequent, predictable, high-impact changes with minimal toil.

Cloud Native architectures are explained below:

  • Containers: Containers are a form of virtualization at the level of the operating system (OS). A single OS instance is dynamically partitioned into several isolated containers, each with its own root file system and resource allocation. Containers may be deployed on both physical and virtual machines. Containers implemented on bare metal offer performance advantages over virtual machines by eliminating hypervisor overhead. More than one microservice may be deployed per container to suit application and performance needs, such as when the colocation of services logically simplifies the architecture or when services fork multiple processes inside a container.

  • Microservices: An architectural approach that implements business capabilities by structuring an application as a set of loosely coupled services. Microservices are often delivered in containers, which allows for the continuous delivery and deployment of large, sophisticated systems. As part of an automated system, each microservice is independently deployed, updated, scaled, and restarted, allowing frequent upgrades to live applications without impacting end users.

  • DevOps: DevOps is the use of lean and agile methods to integrate development and operations into a single IT value stream. By using continuous integration and delivery, DevOps helps businesses develop, test, and deploy software more quickly and iteratively. For instance, DevOps allows the automated installation and verification of a new software feature in an isolated production environment, which, once proven, can be pushed out to the whole production environment. To truly implement DevOps, service providers must embrace cloud-native approaches, provide automated continuous integration, and incorporate vendor delivery pipelines.

  • Continuous Delivery: Continuous delivery prepares each individual application change for release without waiting to bundle it with other changes into a release, or for an event like a maintenance window. It makes releases simple and dependable, allowing enterprises to deploy frequently, with less risk and rapid end-user feedback. This cadence of software delivery transforms time to market for service providers. Eventually, deployment becomes an intrinsic component of the business process and corporate competitiveness, using canary and A/B testing in the real world rather than in laboratories.
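
The canary testing mentioned above can be sketched as a routing rule: a stable hash of some request attribute sends a fixed fraction of traffic to the new version. The 10% default split, version labels, and function name below are illustrative assumptions, not part of any specific platform:

```python
# Hedged sketch of canary routing. A stable hash of the user id gives
# each user a consistent bucket, so the same user always hits the same
# version during the rollout.
import zlib

def pick_version(user_id: str, canary_percent: int = 10) -> str:
    """Return which deployment should serve this user's requests."""
    bucket = zlib.crc32(user_id.encode()) % 100  # stable value in 0..99
    return "v2-canary" if bucket < canary_percent else "v1-stable"
```

In practice this decision usually lives in a service mesh or ingress controller rather than in application code, but the bucketing logic is the same.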

Why Do You Need Cloud-native Network Functions (CNFs)?

Virtualization and VNFs helped organizations begin the transition to cloud-native applications. When implemented successfully, virtualization gave software models more flexibility by eliminating hardware dependencies. However, VNFs have drawbacks: updates are sluggish, restarts are lengthy, the CLI remains the primary interface, the software is often a lift-and-shift of the appliance code, virtualization platforms like OpenStack are difficult to deploy, elasticity is limited, and scaling is challenging. Cloud-native applications circumvent these constraints. They often exhibit the following characteristics:

  • Internal strategies for discovering microservices

  • Dynamic flexibility and scaling

  • Built using microservice architecture (that is, 12-factor apps)

  • Orchestrated using a Kubernetes-like architecture

  • Enhanced feature speed

  • Resilient services

  • Continuous deployment and automation concepts

  • Reduced footprint with a quick restart

  • Modern health and condition monitoring telemetry

  • Consistent lifecycle management across containers

CNFs encapsulate your physical network functions (PNFs) and virtual network functions (VNFs) into containers. You get many VNF benefits without the overhead of virtual machine (VM) software: containers do not need a guest operating system or hypervisor, and CNFs may be spun up and down as required.
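
The spin-up/spin-down elasticity described above is typically driven by a simple replica calculation, similar in spirit to what a Kubernetes Horizontal Pod Autoscaler does with a utilization target. The per-pod capacity figure and replica bounds below are illustrative assumptions:

```python
# Sketch of a horizontal-scaling decision: how many container replicas
# are needed for the current load, clamped to configured bounds.
# capacity_per_pod, min_replicas, and max_replicas are illustrative.
import math

def desired_replicas(current_load: float, capacity_per_pod: float,
                     min_replicas: int = 1, max_replicas: int = 20) -> int:
    """Round up so no pod is asked to serve more than its capacity."""
    wanted = math.ceil(current_load / capacity_per_pod)
    return max(min_replicas, min(max_replicas, wanted))
```

For example, a load of 950 requests/s with pods rated at 100 requests/s each yields 10 replicas; when load drops to zero, the floor of one replica keeps the function available.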

What are the Advantages of Cloud-native Network Functions (CNFs)?

The primary beneficiaries of CNFs are large enterprises that span numerous geographic areas and need extensive network infrastructures.

Digital service providers that embrace a cloud-native strategy and deploy applications in both centralized and distributed locations gain flexibility, scalability, dependability, and portability. Moving beyond virtualization to a cloud-native architecture takes to a new level the efficiency and agility required to swiftly launch the creative, distinctive offerings that markets and consumers demand.

Containers enable users to bundle software (applications, functions, or microservices, for example) with all the files required for execution, while sharing access to the operating system and other server resources. This approach makes it simple to relocate the containerized component across environments (development, test, production, etc.) and even between clouds while preserving its functionality.

This containerization of network architecture components enables several services to run on the same cluster and eases the incorporation of previously decomposed applications, while dynamically routing network traffic to the appropriate pods.

Consequently, the advantages of CNFs include the following:

  • Agility: With CNFs, feature improvements no longer require hardware replacement. Instead, rolling out a new feature typically entails building a new networking microservice and deploying it inside the current infrastructure. This Lego-style method of application development drastically reduces time to market, lowers the cost of new features, and gives customers control over the rate of innovation.

  • Reduced Expenses: Cloud-native networking infrastructure no longer requires specialist hardware. It runs on commodity servers linked in a private cluster, as well as on public cloud infrastructures such as AWS and Google Cloud. With capabilities like auto-scaling, metered pricing, and pay-per-use models, you can eliminate sub-optimal physical hardware allocations and the expense of maintaining physical hardware. Reusable services across teams and business divisions reduce developer and customer OpEx.

  • Fault-Tolerance & Resilience: Containers can be restarted practically instantaneously, microservice-level updates are conducted without downtimes, and automated rapid rollbacks are possible if required.

  • Improved Scalability: A cloud-native solution scales at the level of individual microservices, which can start and terminate in a fraction of a second depending on demand for their services. The use of public clouds enables nearly limitless scaling without hardware upgrades.

  • Enhanced Security and Monitoring: Cloud-native tools for security scanning and cloud penetration testing bolster confidence in the security of solutions. Smaller CNFs can regulate subscribers separately (limiting the blast radius), as opposed to the single-large-box approach. Standard technologies like Prometheus, Kubernetes, and Elasticsearch provide standard container health and status monitoring.

  • API Integrations: A microservices architecture enables simpler application programming interface (API) integrations with other platforms for data collection and analysis.

  • Green IT: CNFs offer smaller data center footprints for improved energy efficiency.

  • Centralized Management Plane: CNF technology provides centralized administration for network functions.

What are the Challenges of Cloud-native Network Functions (CNFs)?

CNFs extend beyond merely containerizing network functions. To obtain the full benefits of cloud-native principles beyond container packaging, network function software must be further rearchitected: decomposed into microservices, able to run multiple versions side by side during updates, and able to use available platform services such as generic load balancers or datastores.

Moreover, as cloud-native environments become more prevalent, CNFs must coexist with traditional VNFs throughout the transition. To efficiently manage growing demand, expedite installations, and minimize complexity, digital service providers must completely automate the design, implementation, maintenance, and operation of their networks. Standardized processes for configuration and deployment, technologies that have developed in open-source communities, and rigorous testing and certification are more important than ever for service providers today.

How does a CNF Work?

A CNF is network functionality delivered in software using cloud-native development and delivery methodologies. This functionality resides within the layers of the OSI model, which is used to design the network stack. The bottom layers (layer 1 and, in certain situations, layer 2) allow the upper layers (2-7) to carry data. These upper layers act as applications that operate on a network payload (frames, packets, datagrams, etc.). For upgrades, a physical layer-1 networking device must be "flashed" with a complete replacement of its artifacts. Configuration at physical layer 1 is accomplished by the atomic application of a versioned configuration file, which entirely replaces the configuration on the device. Virtual layer 1 (and a portion of layer 2) is handled via templated images and bootstrapping. In contrast, layers 2 through 7 are administered by higher-level orchestration or an established control plane (an orchestrator pushing configuration versus a network protocol modifying a route table).
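
The atomic, versioned configuration model described above can be sketched as follows: a new configuration version either fully replaces the previous one or is rejected outright, so the device never runs a half-applied config. The class name and the validation rule here are placeholder assumptions:

```python
# Sketch of atomic, versioned configuration: apply() performs a whole
# replacement (no merge), and a rejected config leaves the previous
# version untouched. The "interfaces" sanity check is a placeholder.
class ConfigStore:
    def __init__(self):
        self.version = 0
        self.config = {}

    def apply(self, new_config: dict) -> bool:
        if not self._valid(new_config):
            return False                    # old config stays active
        self.config = dict(new_config)      # entire replacement
        self.version += 1
        return True

    @staticmethod
    def _valid(cfg: dict) -> bool:
        return "interfaces" in cfg          # placeholder validation
```

The design choice mirrored here is that configuration is declarative and versioned: rollback means re-applying a previous version, not undoing individual edits.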

What is the Deployment Environment of a CNF?

In a perfect world, each of the Network Functions (NFs) we want to deploy would be cloud-native, containerized, and orchestrated by Kubernetes. In that case, the optimal cloud architecture would be a native container environment, such as Red Hat OpenShift or VMware PKS, running on bare metal servers. For the near future, however, any cloud infrastructure created to enable the deployment of NFs must be capable of supporting both traditional VNFs packaged in virtual machines and CNFs packaged in containers.

Currently, this is accomplished by deploying a Kubernetes container environment, such as Red Hat OpenShift or VMware PKS, atop a hypervisor-based virtualization environment, such as OpenStack or VMware vSphere. To do this, you construct a pool of virtual machines using OpenStack or vSphere, and then deploy the Kubernetes cluster supporting your CNFs into that pool. This stacking may appear inefficient, yet it works well in practice. The main disadvantage is that it requires juggling two distinct application orchestration layers: Kubernetes orchestrates the CNFs, whereas each VNF has its own lifecycle manager, often an ETSI-defined specific VNFM.

In the future, KubeVirt, a promising new technology, may provide an elegant answer to this issue. KubeVirt allows any VM-based application to be deployed within a container controlled by Kubernetes. Once the bulk of our NFs are cloud-native, KubeVirt may allow us to simplify our cloud deployments by removing the hypervisor layer while retaining the ability to build and manage VM-based VNFs. Kubernetes then handles all orchestration.

What are the Use Cases for Cloud-native Network Functions?

The fundamental advantages of CNFs are best exploited when a network is expansive and geographically distributed. Therefore, public telecommunications carriers, internet service providers, and cloud service providers are the pioneers in using these distributed network functions. These organizations replace their outdated physical or virtual network equipment with containerized CNFs that demand a fraction of the compute, memory, and physical footprint.

As additional cloud or telecom points of presence come online, these businesses install CNFs and other container services on ever smaller platforms, enabling the deployment of mini data centers for edge computing.

The most common use cases for CNFs in the telecom sector are as follows:

  • Cloud-native carrier-grade NAT: A cloud-native carrier-grade NAT solution moves the Network Address Translation (NAT) function and configuration to the cloud infrastructure of the Internet service provider. This solution reduces the hardware and software requirements of CPE devices, simplifies the administration and setup of the NAT system, and enables horizontal scaling, upgrades, and failovers with ease.

  • Virtual CPE: In this use case, the functionality of the customer-premises equipment (CPE) is transferred to the cloud infrastructure of the service provider. The only networking equipment that has to be installed at the customer's location is an inexpensive L2 switch, with no further capabilities, linked to the ISP's cloud infrastructure. This requires practically no extra maintenance, even if CPE capability is upgraded later. The CPE features are built as a series of CNFs (networking microservices), which, in addition to facilitating simple administration, scalability, and updates, permit the deployment and modification of customer-specific feature sets on demand.
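
The carrier-grade NAT use case above comes down to maintaining a translation table that maps private (IP, port) pairs to ports on a shared public address. The following is a minimal sketch under that assumption; real CGNAT CNFs also handle port pools, session timeouts, and logging:

```python
# Minimal sketch of a carrier-grade NAT binding table. Subscribers'
# private (ip, port) pairs are mapped to ports on one shared public IP.
# The class name, port base, and addresses are illustrative assumptions.
class CgNat:
    def __init__(self, public_ip: str, port_base: int = 20000):
        self.public_ip = public_ip
        self.next_port = port_base
        self.table = {}   # (private_ip, private_port) -> public_port

    def translate(self, private_ip: str, private_port: int):
        """Return the public (ip, port) for a subscriber flow,
        reusing an existing binding when one is present."""
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.next_port += 1
        return self.public_ip, self.table[key]
```

Because the table is just state behind a function, it scales horizontally the way the article describes: additional CNF replicas can each own a slice of the public port range.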

Enterprises with hybrid and multi-cloud architectures are also interested in CNFs, which let them easily deploy network services in public clouds that prohibit physical appliances. In these cloud settings, businesses avoid provisioning many virtual server network appliances, which can be quite expensive in the long run. For these enterprises, the adoption of CNFs is primarily motivated by flexibility and cost reduction.

What are the Differences Between CNFs and VNFs?

Moving network services away from hardware appliances and into software is not limited to CNFs. Virtual network functions (VNFs) are an additional viable option. With a VNF, the hardware-based network appliance's software is moved to a virtual machine (VM). Thus, the sole distinction is that processing and port handling are done in software rather than hardware. This contrasts with CNFs, which package the appropriate network services and run them inside a containerized environment, such as Kubernetes, rather than a VM.

With CNFs, processing and memory are allocated only to the services that need them, and services are distributed throughout a network based on where they are required. Thus, CNFs offer advantages in efficiency, scalability, and performance compared to VNFs.