Twice a year, community members from more than 60 countries and 1,000 companies gather to share OpenStack use cases, perform live software demos and hold free workshops and training. The Summit is a unique opportunity for members of open source communities to meet, exchange ideas and collaborate with each other. Key themes from this year’s US summit included containers, vGPU and OpenStack at the ‘Edge’. Our team were on the ground in Vancouver to report first-hand on these topics and much more.


The summit opened with a keynote from OpenStack Foundation COO Mark Collier, who defined the OpenStack mission as “forming a community of like-minded people who wanted to build open cloud software. Over time, the community grew to include people who successfully combined OpenStack with many different open source components to automate their infrastructure”.

This community has now grown to 90,000 people across 180 countries who build and operate infrastructure and are actively bringing open infrastructure to new markets.

From edge computing to AI, to Container Security, to CI/CD, the work carried out by the OpenStack community to ensure that the infrastructure our economy relies on is truly open has never been more vital.

Mark Collier, COO OpenStack Foundation delivers his keynote


As proud members of the OpenStack Marketplace, vScaler exhibited at the May Summit showcasing its private cloud appliance and managed services. Visitors enjoyed a live demo of our self-service management portal and were able to spin up clusters in minutes, with the added ability of deploying application-specific frameworks at the click of a button.

Summit Special Offer

At the event, vScaler announced its free GPU trial offer, where customers can avail of free GPU instances within the vScaler platform for a limited time only. Register for your free trial here.


Containers and container security were a big theme across the conference, with lots of talks and demos focusing on the topic. The biggest technology release news was the announcement of Kata Containers 1.0 (a merger of the Intel Clear Containers and Hyper runV technologies), which aims to provide a secure runtime environment for containers. The formation of the OpenStack and Kubernetes SIG was a clear indicator that the two technologies are being used together to complement one another (both for OpenStack deployment – see the OpenStack Helm project – and for running on top of OpenStack – see the Magnum project).

Kata Containers 1.0

The speed of containers, the security of VMs

Kata Containers use a lightweight VM to isolate and secure application container workloads. This not only enables Container-as-a-Service but also benefits any vertical that requires the additional security of an unshared kernel: you get the speed of bare metal containers with the workload isolation and security of VMs.

Kata is an excellent fit for both on-demand, event-based deployments such as continuous integration/continuous delivery, as well as longer-running web server applications. It also enables an easier transition to containers from traditional virtualized environments with support for legacy guest kernels and device pass-through capabilities. Kata Containers delivers enhanced security, scalability and higher resource utilization, while at the same time leading to an overall simplified stack.
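In practice, Kata plugs into existing container tooling as an alternative runtime. As a minimal sketch (assuming the upstream `kata-runtime` 1.x packages are installed at the default path, per the Kata Containers docs), Docker can be pointed at Kata like this:

```shell
# Register Kata as an additional Docker runtime (assumes kata-runtime
# is installed at /usr/bin/kata-runtime, the upstream default).
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "runtimes": {
    "kata-runtime": { "path": "/usr/bin/kata-runtime" }
  }
}
EOF
sudo systemctl restart docker

# Launch a container inside its own lightweight VM.
# The reported kernel version differs from the host's, confirming the
# workload runs on a separate guest kernel rather than a shared one.
docker run --rm --runtime=kata-runtime busybox uname -r
```

The container image, CLI and workflow are unchanged; only the runtime underneath swaps a shared-kernel namespace for a dedicated guest kernel.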

vScaler Container Support

With access and support for containerized environments such as Docker, Kubernetes and Apache Mesos, vScaler containers are orders of magnitude faster to provision and much lighter weight to build and define when compared to full software builds or virtual machine images. vScaler is actively rolling out support for Kata Containers 1.0.

To find out more about container support within vScaler, get in touch and we will arrange a technical deep dive with a member of the team.


OpenStack and multi-cloud continue their trajectory into the globally distributed cloud. A number of interesting use cases for edge computing were presented during the summit, along with lively discussions around what exactly ‘edge’ means to users.

“Offering application developers & service providers cloud computing capabilities, as well as an IT service environment at the edge of a network.” Ildiko Vancsa, Ecosystem Technical Lead, OSF

In a recent whitepaper, OpenStack defines edge computing as “offering application developers and service providers cloud computing capabilities, as well as an IT service environment at the edge of a network. The aim is to deliver compute, storage, and bandwidth much closer to data inputs and/or end users.”

Ildiko Vancsa, OpenStack Foundation and Beth Cohen, Verizon, delivering their keynote on the Edge Computing Initiative

The “edge” in edge computing refers to the outskirts of an administrative domain, as close as possible to discrete data sources or end users. This concept applies to telecom networks, to large enterprises with distributed points of presence such as retail, or to other applications, in particular in the context of IoT.

Edge computing is not and should not be limited to just the components and architectures of OpenStack, but there are some reasons that OpenStack is particularly attractive as a platform for cloud edge computing. The OSF Edge Computing Group is asking the open source community to begin exploring these challenges and possibilities. The group recognizes that there is work to be done to achieve its goal of creating the tools to meet these new requirements, and it welcomes and encourages the entire open source community to join in the opportunity to define and develop cloud edge computing. You can find more information about the group’s activities on the OpenStack Edge Computing web page.


vScaler’s cost-effective ‘Edge’ Bundle includes Ceph and OpenStack in our 2U appliance. For more information and for a competitive quote please contact us.


One of the marquee new features in the Queens release is built-in support for vGPUs. Virtual GPU (vGPU) gives users the ability to split a single physical GPU into a number of smaller virtual GPUs, which can then be attached to instances.

While PCI passthrough has been supported for some time within OpenStack, it has typically only been suitable for workloads that can saturate the full capability of the GPU (for example HPC, deep learning etc.). This advancement broadens OpenStack’s GPU capability by potentially enabling VDI and remote workstations, as well as more efficient use of GPUs for AI training techniques and tools.

“Until Queens, the only solution to expose GPUs to guests was PCI passthrough in Nova—effective, but wasteful in terms of resources” Sylvain Bauza, Red Hat

A need for throughput – snapshot from Sylvain Bauza keynote

Instead of a single instance having a dedicated (but costly) P100 card attached to it, we can now have up to 16 instances, each with a vGPU attached, all working from the same P100. The economics of using GPUs become much more appealing!
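From an operator’s point of view, the Queens workflow is short. As a hedged sketch (the vGPU type name `nvidia-35` and the flavor/image names here are illustrative assumptions; the `[devices]` option and `resources:VGPU` flavor property follow the upstream Nova Queens docs):

```shell
# 1. On each compute node with a vGPU-capable card, enable the desired
#    vGPU type in nova.conf (illustrative type name):
#
#    [devices]
#    enabled_vgpu_types = nvidia-35

# 2. Create a flavor that requests one vGPU per instance via the
#    placement resource class:
openstack flavor create --ram 8192 --disk 40 --vcpus 4 vgpu-small
openstack flavor set vgpu-small --property "resources:VGPU=1"

# 3. Boot instances with that flavor; Nova carves a mediated device
#    (a slice of the physical GPU) for each one:
openstack server create --flavor vgpu-small \
  --image ubuntu-18.04 --network private vgpu-demo
```

Because the vGPU is requested as a placement resource, scheduling simply stops handing out slices once the physical card’s vGPU capacity (e.g. 16 on a P100 profile) is exhausted.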


The vScaler team have successfully implemented vGPU in our lab facility and will be rolling this out for our customers shortly. If you are interested in a live demonstration of vGPU, simply get in touch!
