Vertica and OpenStack

Posted March 30, 2016 by Chris Daly


Recently, several different customers have asked about deploying Vertica on an OpenStack cloud platform. This is an interesting situation, so to clear up any misconceptions, first we’ll go through a brief description of OpenStack. Then, we’ll suggest questions that you should ask yourself and your OpenStack administrator when deploying Vertica in an OpenStack environment.

A note about this blog: This document contains recommendations and guidance for deploying Vertica on a platform that has not gone through official testing by the Quality Assurance team at Vertica. However, our solutions architects have researched and have done some amount of testing on the best practices recommended here. If you choose to run Vertica on this platform and experience an issue, the Vertica Support team may ask you to reproduce the issue using the recommendations described here, or in a bare-metal environment, to aid in troubleshooting. Depending on the details of the case, the Support team may also ask you to enter a support ticket with your platform vendor.

At the most basic level, OpenStack is made up of 16 modules that allow a customer to pool hardware and software resources into a single cohesive cloud that offers services to its users. The following modules make up OpenStack:

  • Compute (Nova)
  • Image Service (Glance)
  • Object Storage (Swift)
  • Dashboard (Horizon)
  • Identity Service (Keystone)
  • Networking (Neutron)
  • Block Storage (Cinder)
  • Orchestration (Heat)
  • Telemetry (Ceilometer)
  • Database (Trove)
  • Elastic Map Reduce (Sahara)
  • Bare Metal Provisioning (Ironic)
  • Multi-Tenant Cloud Messaging (Zaqar)
  • Shared File System Service (Manila)
  • DNSaaS (Designate)
  • Security API (Barbican)

When most people talk about deploying virtual machines in OpenStack, what they are really talking about is using the orchestration module known as Heat to automatically provision virtual machines via the compute module known as Nova.  Nova does not include any virtualization software; it connects to and uses underlying virtualization mechanisms (hypervisors) that run on your hosting servers.

Orchestration can be as complex or as simple as the OpenStack administrator wants it to be, but it usually consists of creating the virtual machines, configuring the network, and deploying an operating system (OS).  In some cases, orchestration can also include configuring services inside the OS, such as an Apache web server or a MySQL database, or even more complex software deployments.  The level of orchestration is defined by the OpenStack administrator and varies from deployment to deployment. OpenStack offers a point-and-click interface for setting up compute resources quickly and consistently.
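
To make that concrete, here is a rough sketch of the kind of compute call that sits underneath a Heat template, written against the openstacksdk Python library rather than Heat itself. The cloud name, image, flavor, and network names are placeholders; substitute whatever your OpenStack administrator has published for your environment.

```python
# Minimal sketch: booting a single VM through Nova with the openstacksdk library.
# The cloud, image, flavor, and network names are placeholders for this example.
import openstack

# Reads credentials for the named cloud from clouds.yaml or environment variables.
conn = openstack.connect(cloud="my-openstack")

image = conn.compute.find_image("CentOS-7-x86_64")      # assumed image name
flavor = conn.compute.find_flavor("m1.xlarge")          # assumed flavor name
network = conn.network.find_network("vertica-private")  # assumed network name

server = conn.compute.create_server(
    name="vertica-node-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until Nova reports the instance as ACTIVE.
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```

A Heat template wraps the same resources declaratively, so orchestrating a multi-node Vertica cluster is largely a matter of repeating this pattern with consistent flavors and networks.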

The diagram below illustrates how OpenStack Nova ties together servers running different types of virtualization technology to form a single cloud entity.  OpenStack Nova interacts with the underlying hypervisors (or even bare-metal servers) to automate the deployment and management of virtualized servers.

 

[Diagram: OpenStack Nova coordinating hosts running different hypervisors, plus bare-metal servers, presented as a single cloud]

 

When it comes to deploying Vertica with OpenStack, simply stating that the platform is OpenStack does not provide enough information about the environment.  Because OpenStack can deploy across a wide range of infrastructure choices, the statement is too general.  That being said, here are some questions you should ask your OpenStack administrator to better understand what kind of performance to expect.

What comprises the underlying deployment layer?

Is this a bare-metal install, or is it virtualized? If it is virtualized (the most common case with OpenStack), which hypervisor is being used?

Different hypervisors have different limitations and functionality.  Understanding the limitations and capabilities of the components in the underlying architecture helps you set performance expectations.
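
If the answer is unclear, you can often identify the virtualization layer from inside a guest yourself. The sketch below is a small, Linux-only helper (not part of any Vertica or OpenStack tooling) that asks systemd-detect-virt where available and falls back to the DMI vendor strings under /sys.

```python
# Rough sketch: identify the hypervisor from inside a Linux guest.
import shutil
import subprocess
from pathlib import Path

def detect_hypervisor() -> str:
    # systemd-detect-virt prints a short identifier such as "kvm", "vmware",
    # or "none" when the OS is running on bare metal.
    if shutil.which("systemd-detect-virt"):
        result = subprocess.run(
            ["systemd-detect-virt"], capture_output=True, text=True
        )
        if result.stdout.strip():
            return result.stdout.strip()

    # Fall back to the DMI strings the platform firmware exposes to the guest.
    for dmi_file in ("sys_vendor", "product_name"):
        path = Path("/sys/class/dmi/id") / dmi_file
        if path.exists():
            value = path.read_text().strip()
            if value:
                return value
    return "unknown"

if __name__ == "__main__":
    print("Detected virtualization layer:", detect_hypervisor())
```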

Are the hypervisors over-subscribing resources?

Overprovisioning resources is a common practice in virtualization because most VMs do not run at 100% resource utilization 100% of the time. For example, your physical server may have 32 CPU cores, while the aggregate number of vCPUs across all the VMs on that host may exceed 32; this is overprovisioning. By stacking VMs on a server, virtualization makes better overall use of the physical hardware.  However, this can become an issue for applications like Vertica, where sharing resources such as CPU can result in slow performance. It is important to know whether this overprovisioning situation applies to you.
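
To put numbers on it, the sketch below computes a CPU oversubscription ratio for a single hypothetical host; the core count and per-VM vCPU allocations are made up for illustration. In a real deployment the equivalent policy is a Nova scheduler setting (cpu_allocation_ratio), which your OpenStack administrator can tell you.

```python
# Toy example: estimating CPU oversubscription on one hypervisor host.
# All numbers are hypothetical; ask your administrator for the real figures.
physical_cores = 32                    # cores on the physical host
vm_vcpu_counts = [8, 8, 16, 16, 8, 8]  # vCPUs allocated to each VM on that host

allocated_vcpus = sum(vm_vcpu_counts)
oversubscription = allocated_vcpus / physical_cores

print(f"Allocated vCPUs: {allocated_vcpus} on {physical_cores} physical cores")
print(f"Oversubscription ratio: {oversubscription:.2f}:1")
if oversubscription > 1.0:
    print("CPU is oversubscribed; expect contention under sustained load.")
```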

What is the configuration of the physical servers?

Understanding the physical server configuration makes it easier to size your virtual machine appropriately.  The best case for Vertica is allocating an entire physical server to a single virtual machine, ensuring you are not sharing transport resources such as network I/O and disk I/O with other, unknown VMs.
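
As a starting point for that sizing conversation, the short snippet below simply records what the guest itself sees (vCPU count and memory) so you can compare it against the physical specifications your administrator reports. It is Linux-specific because it reads /proc/meminfo, and it is only a convenience, not a Vertica tool.

```python
# Quick inventory of what the VM itself sees, for comparison against the
# physical host specifications reported by your OpenStack administrator.
import os

def guest_memory_gib() -> float:
    # MemTotal in /proc/meminfo is reported in kibibytes on Linux.
    with open("/proc/meminfo") as meminfo:
        for line in meminfo:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / (1024 * 1024)
    return 0.0

print("vCPUs visible to the guest:", os.cpu_count())
print(f"Memory visible to the guest: {guest_memory_gib():.1f} GiB")
```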

How fast is the storage that is being provided?

In OpenStack, storage is typically provided to the VMs via the Cinder module.  Cinder coordinates attaching storage from a pool of storage resources, usually a SAN.  Cinder can have multiple connection-type profiles, each with different I/O throughput limits, all configured by the OpenStack administrator.  OpenStack users may have little to no visibility into the throughput limits of the storage they are given, which is why we always encourage Vertica administrators to use the V*Perf tools to measure performance at the operating-system level.  In the case of storage for Vertica, the vioperf tool measures read and write performance on a user-defined directory (where the Vertica data files will be kept). The target for disk performance is a minimum of 20 MB/s per CPU core on the virtual machine.
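
A quick way to sanity-check a vioperf run against that target is to work out the minimum throughput your VM needs from its core count. The measured value in the sketch below is a placeholder; replace it with the read or write rate vioperf actually reports for your data directory.

```python
# Sketch: compare a measured disk throughput figure against the rule of
# thumb of at least 20 MB/s per CPU core on the node.
import os

MB_PER_SECOND_PER_CORE = 20   # minimum target per core

cores = os.cpu_count() or 1
required_mb_s = cores * MB_PER_SECOND_PER_CORE

measured_mb_s = 950.0         # placeholder: take this from your vioperf output

print(f"{cores} cores need at least {required_mb_s} MB/s of disk throughput")
if measured_mb_s >= required_mb_s:
    print(f"Measured {measured_mb_s} MB/s meets the target.")
else:
    print(f"Measured {measured_mb_s} MB/s falls short; revisit the Cinder profile.")
```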

What is the level of hardware redundancy within the physical servers and storage?

While Vertica is a shared-nothing cluster that replicates data (in a K-safe configuration), if all the VMs in the cluster live on a single hypervisor and that box goes down, you are out of luck.  Similarly, if storage for the VMs is provided by a SAN, you need to ensure that the loss of a single fiber connection does not bring down the cluster.  Once you have determined the level of hardware redundancy, you can augment it with any necessary software-based redundancy.
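
Hardware redundancy has to come from the administrator, but you can verify the software side from the database itself. The sketch below uses the vertica_python client to compare the designed and current fault tolerance reported in the SYSTEM table; the connection details are placeholders for your own cluster.

```python
# Sketch: check designed vs. current K-safety with the vertica_python client.
# Host, credentials, and database name are placeholders.
import vertica_python

conn_info = {
    "host": "vertica-node-01",
    "port": 5433,
    "user": "dbadmin",
    "password": "********",
    "database": "vmart",
}

with vertica_python.connect(**conn_info) as connection:
    cur = connection.cursor()
    cur.execute(
        "SELECT designed_fault_tolerance, current_fault_tolerance FROM system"
    )
    designed, current = cur.fetchone()
    print(f"Designed K-safety: {designed}, current K-safety: {current}")
    if current < designed:
        print("The cluster is running below its designed fault tolerance.")
```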

How big is the cloud?

Since most OpenStack deployments do not give end users control over the final physical location of the VMs they create, in a large data center you might experience network latency between Vertica nodes. Latency is caused by distance and network hops between the physical hosts.  To mitigate this issue, you may be able to take advantage of a Nova feature called host aggregation.  Host aggregates group host servers together so they can be used only by certain users.  While this feature can ensure that all the hosting servers for a cluster are co-located within a larger data center, it also removes those hosting servers from general use and can be seen as a waste of hosting resources.
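
Before raising the issue with your administrator, you can get a rough feel for inter-node latency from the nodes themselves. The sketch below simply times TCP connections from one node to its peers on the Vertica client port; the hostnames are placeholders, and the vnetperf tool in the installation kit provides a far more thorough network measurement.

```python
# Rough sketch: time TCP connections to peer nodes on the Vertica client
# port (5433) to get a feel for inter-node latency. Hostnames are placeholders.
import socket
import time

PEERS = ["vertica-node-02", "vertica-node-03"]
PORT = 5433

for host in PEERS:
    start = time.perf_counter()
    try:
        with socket.create_connection((host, PORT), timeout=2):
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{host}: connected in {elapsed_ms:.2f} ms")
    except OSError as exc:
        print(f"{host}: connection failed ({exc})")
```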

The bottom line

Cloud-based computing is here to stay, but not all clouds are created equal.  For customers deploying Vertica on OpenStack, understanding how the environment is set up is necessary to get the best possible Vertica cluster performance.  As always, we suggest using the V*Perf tools included in the Vertica installation kit to measure and validate configurations, and using the results as the basis for a discussion with your OpenStack administrators to ensure a successful deployment.