Categories: News, Newsletter, Resources

Never Miss an Update



Never miss an update – subscribe to our newsletter today!

We strive to bring you the latest news and announcements from the OpenStack cloud world, as well as to keep you informed of our technology roadmap. Our newsletters include the latest news and announcements from vScaler, demo videos, technical support articles and more.

Subscribe today by adding your email address below.


Categories: Case Studies, News

Enabling a CSP for a swift market entry with modular multi-platform services



NxAARK Technologies Pvt Ltd is an organisation focused on delivering IT infrastructure Services including DC Design, Build and Operate consultancy services, Consolidated Modular Data Centres and NxAARK Enterprise Cloud.


NxAARK’s primary objective was to find a cost-effective cloud model that would enable it to quickly offer Hosted Enterprise Cloud to its customers at a competitive price point.

NxAARK was keen to work with a partner who could deliver not only the cloud platform that would power its own infrastructure, but one that was also available in an appliance form. They recognised that working with an established cloud partner would allow them to leverage its technological capabilities while their own team focused on mission-critical tasks and market creation.

After evaluating multiple global cloud providers, NxAARK zeroed in on vScaler, primarily because of the ease and simplicity of the technology rollout, the features that came as standard and, not least, the competitive price point.

“At NxAARK, we strongly believe that Cloud must be ubiquitous and tangible across the India market, so our main goal is to implement Enterprise Cloud POPs across multiple Tier 2 cities along with on-premise Private Cloud implementations – all managed through one single orchestration panel. vScaler has given us the ability to deliver this robust and repeatable cloud infrastructure”.
D. Satyamoorthy, Managing Director, NxAARK Technologies.


The requirement was to set up a service portfolio that would enable NxAARK as a Cloud Service Provider (CSP). The desire was to source a non-proprietary solution which would be performance optimised out of the box and able to support multiple applications such as HPC, Big Data and AI/ML. As vScaler is built on open-source technology there is no vendor lock-in, and it comes with native support for containers, GPU and vGPU, offering NxAARK complete flexibility in its choice of service offerings.


NxAARK found that the key differentiators between vScaler and competing offerings covered multi-level integrations, lead times on solution feasibility, cost sensitivity from a market-offering standpoint, performance reliability and the level of ongoing support services.

The vScaler HCI modular solution delivered a fully integrated, single-pane-of-glass platform with multi-platform, multi-location, hosted/on-premise flexibility, plus a fast learning curve that aligned with NxAARK's build-as-you-grow approach. With an out-of-the-box implementation, vScaler enabled NxAARK to go live within weeks. Additionally, vScaler provided consulting services on building the product portfolio, including right-sizing the solution for the market, which enabled NxAARK to be business-ready once the infrastructure was in place. Key throughout was the vScaler team's continuous involvement and ownership in bringing the partnership up to speed, not just at a local level but globally.

NxAARK's intent was to build an offering which would span hosted enterprise cloud and on-premise private cloud, which it can now deliver thanks to vScaler. As a result, NxAARK is able to compete very well in the marketplace against Tier 1 cloud providers such as AWS, Microsoft and IBM. The subsequent phase of the project includes implementing Cloud POPs in Chennai, Kochi, Coimbatore, Colombo and Pune.

Download the Case Study in full, or get in touch if you would like to find out more.


NxAARK's fundamental focus is on Information Technology & Infrastructure Services. It was founded by a team which collectively has more than 100 years of global market experience in various aspects of the IT business, from strategy and sales to marketing, channel, operations and product development. The diversity and experience that they bring to the table form the pivot of the business model. NxAARK's mission is to take away IT infrastructure complexities and help clients focus on their core business. IT has transformed from the mundane computing, storage, ERP and data centres to being more agile and flexible in an ever-changing market scenario. NxAARK embraces this change and delivers cost-efficient, highly available solutions to the marketplace.



Categories: News, Tech Articles

ResNet Benchmarks on the NVIDIA™ DGX-2 Server

The worlds of AI and HPC have an insatiable appetite for more and more performance. With the rise of GPUs being used to run a lot of these AI frameworks, it’s only fitting that we put the fastest system in the world to the test. One of NVIDIA’s DGX-2 servers arrived onsite recently, and our engineers have integrated this with our internal vScaler lab facility.

The NVIDIA DGX-2 Server

The DGX-2 server builds on the success of the DGX-1 server, increasing and improving pretty much everything to create a 2-petaflop (tensor ops) monster of a system. Some of the hardware highlights include:

  • 16x V100 32GB GPUs (that's half a TB of GPU HBM2 memory space when used with
    CUDA unified memory and cudaMallocManaged())
  • 12x NVSwitch switches providing a non-blocking GPU fabric with 2.4TB/s bisection
    bandwidth
  • 800Gb/s of network trunk to get data in and out (seems overkill for just ssh!)
  • 30TB of local NVMe SSD to keep those GPUs busy
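The "half a TB" memory figure above is worth making concrete with quick arithmetic (a trivial sketch, using only the numbers from the spec list):

```python
# Quick sanity check of the DGX-2 headline memory figure quoted above.
num_gpus = 16
hbm2_per_gpu_gb = 32

# 16 GPUs x 32 GB HBM2 = 512 GB – the "half a TB" pool addressable
# as one space via CUDA unified memory / cudaMallocManaged().
total_hbm2_gb = num_gpus * hbm2_per_gpu_gb
print(f"Total GPU HBM2: {total_hbm2_gb} GB")  # 512 GB
```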

Another tip of the hat needs to go to the NVIDIA GPU Cloud, as the number of containers/applications/frameworks available on this platform is growing daily. Optimised containers across deep learning, AI and HPC are readily available, and we used the TensorFlow container from this platform for the benchmarking exercise:

$ docker pull nvcr.io/nvidia/tensorflow:18.10-py3

Gone are the days of manually building libraries, matching python versions, source-code hacking and praying!

NVIDIA DGX-2 Server Components

Integration with vScaler

vScaler integration was seamless – we had a preconfigured image that we've been using for our DeepOps integration, and we flashed the system with that (bare-metal provision, not virtualised). This provided us with all the tools needed to access the NVIDIA GPU Cloud container repository, along with Kubernetes and other optimisation options, all based on Ubuntu 18.04 LTS (Bionic).

Benchmark Setup

All benchmarks were run using nvidia-docker, making use of the latest TensorFlow container provided by NVIDIA GPU Cloud (nvidia/tensorflow:18.10-py3), with the synthetic ImageNet dataset (provided as part of tf_cnn_benchmarks).

The benchmark script used was obtained from the TensorFlow tf_cnn_benchmarks repository, and we performed a sweep of batch sizes across the tests. All tests were run a number of times and the reported numbers were averaged.

TensorFlow Benchmarking for ResNet Models

To assess the performance of the system we employed the widely used ResNet model, a standard baseline for assessing training and inference performance. ResNet is shorthand for Residual Network and, as the name suggests, it relies on residual learning, which tries to solve the challenges of training deep neural networks. Such challenges include increasing difficulty of training as networks go deeper, as well as accuracy saturation and degradation. We selected two common models: ResNet-50 and ResNet-152 (where ResNet-50 is a 50-layer Residual Network, and 152 is… well, you've guessed it!).
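The core idea of residual learning can be sketched in a few lines: rather than asking a layer to learn a full mapping H(x), a residual block learns only the correction F(x) and adds the input back through a skip connection. A toy sketch in plain Python (not the actual ResNet layers, which are convolutions):

```python
def residual_block(x, f):
    """A toy residual connection: output = f(x) + x, elementwise."""
    return [fi + xi for fi, xi in zip(f(x), x)]

# A "layer" that only needs to learn a small correction on top of the identity.
half = lambda v: [0.5 * vi for vi in v]

x = [2.0, 4.0, 6.0]
out = residual_block(x, half)
print(out)  # [3.0, 6.0, 9.0]

# If the layer learns nothing (f(x) = 0), the block is exactly the identity –
# which is why very deep residual stacks avoid the degradation seen in plain stacks.
zero = lambda v: [0.0 for _ in v]
assert residual_block(x, zero) == x
```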

ResNet was introduced in 2015 and was the winner of ILSVRC 2015 (the Large Scale Visual Recognition Challenge) in image classification, detection and localisation. There are of course many other convolutional neural network (CNN) architectures we could have chosen, and in time we hope to evaluate these also. See Table 1 for a brief history of ILSVRC CNN architecture models.

Year  Model      Authors                                            Top-5 Error  Parameters
1998  LeNet      Yann LeCun et al.                                  –            60 thousand
2012  AlexNet    Alex Krizhevsky, Geoffrey Hinton, Ilya Sutskever   13.3%        60 million
2013  ZFNet      Matthew Zeiler, Rob Fergus                         14.8%        –
2014  GoogLeNet  Google                                             6.67%        4 million
2014  VGGNet     Simonyan, Zisserman                                7.3%         138 million
2015  ResNet     Kaiming He                                         3.6%         –

Table 1: ILSVRC competition CNN architecture models.

Each model was run using various batch sizes to ensure that each GPU was fully utilised, demanding the highest level of performance from the system. Each combination of batch size and GPU count was tested 3 times over 20 epochs and the average result recorded. Results below show the images processed per second during the network training phase.
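The averaging step described above is straightforward (a sketch; the throughput numbers here are made up purely for illustration):

```python
from statistics import mean

# Hypothetical images/sec from three repeated runs of one
# (model, batch size, GPU count) combination.
runs = [2450.0, 2480.0, 2470.0]

avg_images_per_sec = mean(runs)
print(f"{avg_images_per_sec:.1f} images/sec")  # 2466.7 images/sec
```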

Training Command:
python tf_cnn_benchmarks.py --data_format=NCHW --batch_size=${BATCH_SIZE} \
    --model=${MODEL} --optimizer=momentum --variable_update=replicated --nodistortions \
    --gradient_repacking=8 --num_gpus=${NUM_GPUS} --num_epochs=10 --weight_decay=1e-4 \
    --data_dir=/workspace/data --use_fp16

Inference Command:
python tf_cnn_benchmarks.py --forward_only=True --batch_size=${BATCH_SIZE} --model=${MODEL} \
    --num_epochs=10 --optimizer=momentum --distortions=True --display_every 10 \
    --num_gpus=${NUM_GPUS} --data_dir=./test_data/fake_tf_record_data/ --data_name=imagenet
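The batch-size/GPU-count sweep can be scripted by generating one command per combination. A sketch, assuming the script is tf_cnn_benchmarks.py from the TensorFlow benchmarks repository; the specific batch sizes and GPU counts listed here are illustrative:

```python
from itertools import product

MODELS = ["resnet50", "resnet152"]
BATCH_SIZES = [64, 128, 256]       # illustrative sweep values
GPU_COUNTS = [1, 2, 4, 8, 16]

def training_cmd(model: str, batch_size: int, num_gpus: int) -> str:
    """Build one tf_cnn_benchmarks training invocation for the sweep."""
    return (
        "python tf_cnn_benchmarks.py"
        f" --data_format=NCHW --batch_size={batch_size} --model={model}"
        " --optimizer=momentum --variable_update=replicated --nodistortions"
        f" --gradient_repacking=8 --num_gpus={num_gpus} --num_epochs=10"
        " --weight_decay=1e-4 --data_dir=/workspace/data --use_fp16"
    )

commands = [training_cmd(m, b, g) for m, b, g in product(MODELS, BATCH_SIZES, GPU_COUNTS)]
print(len(commands))  # 2 models x 3 batch sizes x 5 GPU counts = 30 runs
```

Each generated string can then be handed to a shell (or a scheduler) and repeated for the averaging pass.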

Comparing the average images per second of each model for a fixed batch size and varying GPU count shows the near-linear performance increase for each GPU added. For example, when running ResNet-50 with a batch size of 256, going from 1 GPU to 16 GPUs results in a scaling factor of 13.9 (which represents 86% scaling efficiency). We're confident that with some tweaking we can improve this further.
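The scaling-efficiency figure falls out of a one-line calculation (a sketch using the ResNet-50 numbers quoted in the text):

```python
def scaling_efficiency(speedup: float, num_gpus: int) -> float:
    """Fraction of ideal linear scaling achieved: 1.0 means perfectly linear."""
    return speedup / num_gpus

# ResNet-50, batch size 256: going from 1 GPU to 16 GPUs gave a 13.9x speedup.
eff = scaling_efficiency(13.9, 16)
print(f"{eff:.1%}")  # 86.9%
```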

Charts: ResNet-50 Training, ResNet-152 Training, ResNet-50 Inference and ResNet-152 Inference.

During the tests we monitored the system power draw through the onboard sensors and captured data points using ipmitool. Below is a chart of the power draw over time, as the tests iterated through the models, batch sizes and number of GPUs.
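Capturing that trace is a simple polling loop around ipmitool. A sketch of the parsing step; the sample line follows the format of `ipmitool dcmi power reading` output, and the wattage value is illustrative:

```python
import re

def parse_power_watts(ipmitool_output: str) -> int:
    """Extract the instantaneous power reading (watts) from
    `ipmitool dcmi power reading` output."""
    match = re.search(r"Instantaneous power reading:\s*(\d+)\s*Watts", ipmitool_output)
    if match is None:
        raise ValueError("no power reading found in ipmitool output")
    return int(match.group(1))

# Sample line in the format produced by `ipmitool dcmi power reading`:
sample = "    Instantaneous power reading:                   5120 Watts"
print(parse_power_watts(sample))  # 5120
```

On the live system this would be wrapped in a loop that shells out to `ipmitool dcmi power reading` (e.g. via `subprocess.run`) every few seconds and appends timestamped readings to a CSV for charting.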

Power draw during benchmarking

Thanks to the tech team in the lab for their work on this – stay tuned for more results on the NVIDIA DGX-2 server as we look at other applications and workloads. For more information on the NVIDIA DGX-2 server, visit the NVIDIA website.

Have you something interesting to run on the DGX-2 Server?


Categories: Case Studies, News

vScaler empowers teaching at Keele University

vScaler is now a key component of the MSc Advanced Computer Science course at Keele University, not only to teach theoretical cloud concepts but also to apply and evaluate practical solutions for researchers.


Keele University sought a simple to use and cost-effective cloud-based platform for research and development on the Advanced Computer Science Masters course. The objective was to find a platform that enabled High-Performance Computing (HPC), Big Data and software containers in a training environment.


vScaler was installed onsite, providing the University with an optimised private cloud platform, with built-in application stacks to support HPC on demand and Big Data workloads. The platform supports Docker and Kubernetes which provide application portability with other container-based systems.

The platform is currently being used by academics, demonstrators, researchers and teaching staff alike, and has been directly embedded into teaching at the School of Computing and Mathematics which runs the MSc Advanced Computer Science course. One of the key modules of the course is ‘Cloud Computing’. This module teaches the fundamentals of cloud technologies such as different delivery methods, infrastructure and data mechanisms. vScaler is a key component of the MSc course, not only to teach the theoretical cloud concepts but also to apply and evaluate practical solutions.

“vScaler empowers teaching to educate the next generation of computer scientists, and to provide unique and on-demand resources for research”. Dr. Misirli, Lecturer, Keele University

Lectures are supported by vScaler workshops, where students learn how to set up their own cloud infrastructure and develop software. vScaler is also part of the coursework to develop cloud applications and to critically evaluate the processes behind cloud technologies.

The platform is also used to support third-year dissertation projects, which often come with custom requirements. Because the variety and number of projects change every year, vScaler is an important tool that enables staff to provide a flexible infrastructure for students.

vScaler is also used by staff in research projects. The computational resources required for research vary greatly by project and timeline, and it is crucial to support staff with appropriate resources at any given time to generate reproducible scientific data. vScaler provides a unique opportunity to utilise and distribute on-demand computational resources at the School. This resource is also cited in funding applications for different projects.

Download the Case Study in full, or get in touch if you would like to find out more.


Keele University is proud to be joint No.1 in England for course satisfaction in the Guardian University League Table 2019, in addition to having been ranked No.1 in England for student satisfaction among broad-based universities in the 2018 National Student Survey. The University was also awarded Gold in the 2017 Teaching Excellence Framework. For more information, visit the Keele University website.