Embedded Linux kernel tuned for virtualisation and determinism

June 25, 2013 // By Nick Flaherty
Wind River says it has tackled one of the main objections to using virtualisation in networking and communications applications by optimising the embedded Linux kernel for real-time performance.

The Open Virtualization Profile is an add-on software profile for Wind River Linux, developed by optimising open source Kernel-based Virtual Machine (KVM) technology. The result is a real-time, deterministic KVM solution with virtual machine management, allowing hypervisor and virtualisation technology to reduce hardware costs and make software intelligence portable across the network.

The Open Virtualization Profile allows network services to be deployed on virtual machines without the performance loss associated with traditional, proprietary IT-style virtualisation products. This real-time approach enables products that can flexibly run intelligent services anywhere on the network, from the access layer right to the core, driving up network efficiency and substantially lowering operational network costs.

The profile offers low latency, with a minimum latency of less than 3 µs, flexible provisioning of virtual machines, live migration of virtual machines and CPU isolation for advanced security applications. It is open source-based and compatible with frameworks such as the Yocto Project, OpenStack, OpenFlow, oVirt and others, with broad support for a variety of guest operating systems.
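As an illustration of the CPU isolation the profile refers to, a common KVM technique is to reserve host cores at boot and pin guest vCPUs to them so that guest workloads are not disturbed by the host scheduler. The sketch below uses standard Linux and libvirt tooling, not Wind River-specific configuration; the guest name "guest0" and the core numbers are hypothetical.

```shell
# Keep the host scheduler off cores 2 and 3 by adding kernel boot
# parameters (appended to the kernel command line in the bootloader):
#   isolcpus=2,3

# With libvirt, pin the two vCPUs of an already-defined KVM guest
# (hypothetical name "guest0") to the isolated cores:
virsh vcpupin guest0 0 2
virsh vcpupin guest0 1 3

# Verify the vCPU-to-physical-CPU mapping:
virsh vcpuinfo guest0
```

Pinning plus isolation is what makes worst-case latency figures such as the sub-3 µs claim meaningful, since the guest's cores are no longer shared with ordinary host tasks.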

The kernel is integrated with the Intel Data Plane Development Kit (Intel DPDK) and supports Intel DPDK Accelerated Open vSwitch.
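For context on what DPDK integration involves: DPDK's user-space poll-mode drivers take a network interface away from the kernel stack, which typically means loading a userspace I/O module and binding the NIC to it. The sketch below uses the generic binding script shipped with DPDK releases of this period; the PCI address is hypothetical and paths vary by DPDK version.

```shell
# Load the userspace I/O framework that DPDK's poll-mode drivers use
modprobe uio
modprobe igb_uio   # DPDK's UIO kernel module, built from the DPDK sources

# Bind a NIC (hypothetical PCI address) to igb_uio so DPDK applications,
# including a DPDK-accelerated Open vSwitch, can drive it from user space
./tools/dpdk_nic_bind.py --bind=igb_uio 0000:04:00.0

# List which devices are now under DPDK control
./tools/dpdk_nic_bind.py --status
```

Bypassing the kernel network stack this way is what lets a virtual switch forward packets at rates suitable for carrier workloads.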

“As networks are pushed to their limits, virtualisation is becoming an increasingly important approach. Operators are looking toward NFV to support the transition to scalable platforms that enable flexible deployment of network services,” said Jim Douglas, senior vice president of marketing at Wind River. “With Wind River Open Virtualization Profile, we are delivering a real-time virtualisation solution that will support the rigorous SLAs of a carrier network and enable them to gain the flexibility, scalability, and cost and energy benefits cloud data centers already enjoy.”

“By moving from a distributed hardware environment to a flexible and virtualised environment or cloud, operators can rapidly deploy new applications and services where and when they are needed instead of updating individual central office locations or hardware,” he said.

Wind River; www.windriver.com