Vertica Integration with DataVirtuality: Connection Guide

For Vertica 8.1

About Vertica Connection Guides

Vertica connection guides provide basic information about setting up connections to Vertica from software that our technology partners create. These documents provide guidance using one specific version of Vertica and one specific version of the third-party vendor's software. Other versions of the third-party product may work with Vertica, but they may not have been tested. This document provides guidance using the software versions described in the following section.

Software Versions Used in this Document

This document describes the integration of Vertica with DataVirtuality. The Partner Engineering team has tested the on-premises version of DataVirtuality in a Windows environment. We used the following software versions:

  • DataVirtuality Studio 2.0.23
  • DataVirtuality Server on-premises 2.1
  • Windows Server 2012 R2 Standard
  • Vertica server 8.1
  • Vertica client 8.1

DataVirtuality Overview

DataVirtuality is a federated relational query engine that lets you access multiple data sources as a single virtual database. You can use standard SQL or client tools to query the virtual database.

This diagram shows the architecture of DataVirtuality.

DataVirtuality software allows you to:

  • Configure Vertica as the analytical storage.
  • Connect multiple data sources, including Vertica, to the DataVirtuality server.
  • Write cross-database joins and queries.
  • Create virtual views to provide only relevant information for business users.
  • Connect SQL and client tools to DataVirtuality Server to generate reports using data in views and data obtained directly from the source tables. Client tools may be visualization tools such as Tableau, database browsers such as DBVisualizer, or spreadsheet managers such as Excel.
  • Optimize reports by accepting DataVirtuality optimizations. When you accept optimizations, query results are stored in tables in the analytical storage.
  • Run optimized reports. After you accept optimizations, DataVirtuality automatically redirects the queries to data in the analytical storage and not to the live data in the source databases.

About DataVirtuality Server

You can install DataVirtuality Server on premises or as a cloud-based instance on Amazon. In production environments, DataVirtuality Server uses a Postgres database as its internal data engine. The trial version of DataVirtuality uses an H2 database.

About Analytical Storage

The analytical storage, also known as the virtualization engine or the logical (internal) data warehouse, is a database that DataVirtuality Server uses to materialize query results. DataVirtuality Server uses the analytical storage only when you accept query optimizations. By default, DataVirtuality Server accesses live data directly from the data sources without accessing the analytical storage.

The benefit of query optimization is enhanced performance. However, the materialized queries in the analytical storage must be kept up to date. You can choose to refresh the contents of the analytical storage incrementally, or you can perform a full load each time you refresh the source data.

You can choose to query live data from the source, from the analytical storage-hosted data, or from a combination of the two. You can specify which data you want to store in the analytical storage and which data you want to query directly from the source.

Where Does Vertica Fit In?

You can configure Vertica to run as the analytical storage, as a data source, or as both analytical storage and data source.

Important If you use Vertica as the analytical storage, the materialized tables count towards your Vertica license.

Install DataVirtuality Suite

The following sections describe how to install the DataVirtuality Suite.

About DataVirtuality Suite

To install the DataVirtuality software, use the DataVirtuality Suite installer. The installer is available for Windows, Linux, and MacOS. In this document, we provide the installation instructions for the trial version of the DataVirtuality Suite on Windows.

DataVirtuality Suite includes these components:

  • DataVirtuality Server: The query engine that executes the data federation.
  • DataVirtuality Studio: The client tool for managing DataVirtuality Server. DataVirtuality Studio uses the DataVirtuality JDBC driver to connect to DataVirtuality Server.
  • DataVirtuality drivers: JDBC and ODBC drivers to connect to DataVirtuality Server from SQL and client tools.

Download and Install DataVirtuality Suite

For an on-premises installation on Windows, follow these instructions:

  1. Navigate to http://click.datavirtuality.com/start-now-en/ and request a trial of DataVirtuality.
  2. Download the DataVirtuality Suite installer provided to you by the DataVirtuality team.
  3. As Windows Administrator, start the installer and select all the components for a full on-premises installation.
  4. Follow the installation steps to install DataVirtuality Suite.

About the Vertica JDBC Driver

DataVirtuality ships the Vertica JDBC driver version 7.1.1 with the DataVirtuality Suite. You do not need to install the Vertica driver unless you need to connect to an earlier version of Vertica or enable new capabilities.

Important DataVirtuality Server uses this driver to connect to Vertica as a data source or as analytical storage.

If you need a different version of the Vertica JDBC driver, replace it in the following folder:

C:\Program Files\Data Virtuality Suite\DVServer\modules\com\vertica\main

For information about installing Vertica drivers, see the Vertica documentation. Refer also to the DataVirtuality Documentation.

Connect DataVirtuality Studio to DataVirtuality Server

After you install DataVirtuality Suite, you must start DataVirtuality Server and DataVirtuality Studio and create a connection between them.

Start DataVirtuality Server

When installed on Windows for on-premises access, DataVirtuality Server runs as a Windows service. You can start DataVirtuality Server from the Windows Start menu or by using the Service Manager utility in Windows Control Panel.

Start DataVirtuality Studio

You can start DataVirtuality Studio from the Windows Start menu or by double-clicking the executable file, dvstudio.exe, located in the DVStudio folder. The default path on Windows is:

 C:\Program Files\Data Virtuality Suite\DVStudio\dvstudio.exe

Configure the Connection

When you start DataVirtuality Studio for the first time, the Connect to DataVirtuality Server wizard starts.

To configure the connection:

  1. Supply the connection information as follows:

    • Connections (connection name): If connections to DataVirtuality Server have previously been configured, select one from the list; otherwise, provide information for a new connection.
    • Host (localhost): For DataVirtuality Server on premises, the name or IP address of the machine running DataVirtuality Server. For DataVirtuality Server in the cloud, the URL of the DataVirtuality instance on Amazon.
    • Port (31000): The default port is 31000.
    • SSL (OFF): SSL is disabled by default.
    • Schema (datavirtuality): Do not change the schema name.
    • User name (admin): The name of the administrative user. The default value is admin.
    • Password (admin): The password for the administrative user. The default value is admin.
  2. Test the connection.
  3. Click Connect to connect to DataVirtuality Server.

Note Make sure that DataVirtuality Server is up and running before you attempt to connect. If the server is not running, a Connection failed error appears.

Configure the Analytical Storage

After configuring the connection to DataVirtuality Server, you can configure the analytical storage. This step is required only if you intend to accept query optimizations, as described in Accept Optimizations.

DataVirtuality allows you to choose from a variety of databases to host the analytical storage. Follow these steps to configure Vertica as the analytical storage:

  1. Create an empty schema (dwh in this example) in your Vertica database.

    CREATE SCHEMA dwh;
  2. Create a new user and grant read/write privileges to the dwh schema.

    CREATE USER datavirtuality_user;
    ALTER USER datavirtuality_user IDENTIFIED BY 'datavirtuality_admin';
    GRANT ALL PRIVILEGES ON SCHEMA dwh TO datavirtuality_user WITH GRANT OPTION;
    GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA dwh TO datavirtuality_user;
  3. In the Data Explorer of DataVirtuality Studio, right-click on the Analytical Storage node and select Add Analytical Storage.

  4. In the Add Analytical Storage wizard, select HP Vertica as the data source type and click Next.
  5. In the New Analytical Storage window, enter the connection parameters:

    • Host (required): The Vertica server IP address or name.
    • Port (required): The default is 5433.
    • Database (required): The Vertica database name. In this example, the database name is Partner72DB.
    • Schema (required): The schema name. In this example, the schema is dwh.
    • User (required): The database user. In this example, the user is datavirtuality_user.
    • Password (required): The database password. In this example, the password is datavirtuality_admin.
    • Datasource Parameters (required):

      importer.useFullSchemaName=FALSE,importer.tableTypes="TABLE,VIEW", importer.importIndexes=TRUE
    • Auto-generated parameters:

      importer.schemaPattern=dwh,importer.defaultSchema=dwh
      

      The import settings search for and collect all available metadata about the database. This metadata is stored in the SYSTEM and SYS schemas in the DataVirtuality Server data engine (an H2 or Postgres database). In the case of Vertica, the setting importer.importIndexes is silently ignored.

    • Translator Parameters: empty by default

      A translator is an interface between DataVirtuality Server and the data source. A translator imports metadata and determines which SQL constructs are supported for pushdown and how data is retrieved.

      In most cases, you do not need to adjust the translator parameters except when your queries use international character sets. To support multibyte characters, increase the value of the varcharReserveAdditionalSpacePercent translator parameter to accommodate the Vertica VARCHAR data type. Most SQL databases, including DataVirtuality Server, calculate the length of VARCHAR in characters. Vertica, however, calculates the length of VARCHAR in bytes. This means that a VARCHAR(X) field in Vertica can sometimes store fewer characters than comparable data types in other systems, especially when international characters are used.

      For information about translator parameters, see the DataVirtuality Documentation.

    • JDBC parameters: Use this field to specify additional JDBC settings such as Native Connection Load Balancing. For a complete list of Vertica JDBC settings, see the Vertica documentation.
  6. To add Vertica as the analytical storage, click Test connection, and then click Finish.

To view the objects in the analytical storage, double-click the Analytical storage node (dwh) in the Data Explorer.
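
If you want to confirm the configuration from the Vertica side, you can query the Vertica system catalog. The following is a minimal sketch that you run directly in Vertica; v_catalog.tables is a standard Vertica system view, and the dwh schema name matches the example above:

  -- Run in Vertica: list any tables that DataVirtuality has created
  -- in the analytical storage schema
  SELECT table_schema, table_name, owner_name
  FROM v_catalog.tables
  WHERE table_schema = 'dwh';

Immediately after you add the analytical storage, the result is typically empty; tables appear once you accept and run optimizations.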

Add Data Sources

Before you can use DataVirtuality to query data, you must connect DataVirtuality Server to one or more data sources. You can configure different SQL databases as DataVirtuality data sources.

To add Vertica as a data source using JDBC, follow these steps:

  1. In DataVirtuality Studio, right-click the Data source node.
  2. Choose Add data source.
  3. In the Add data source wizard, select JDBC and HP Vertica.

  4. Click Next.
  5. Enter the connection information for your specific Vertica database.

    • Alias (required): The name of the data source that will appear in the Data Explorer and that you use in the query editor to fully qualify the table names. In this example, the alias is VerticaVMartSource.
    • Host (required): The Vertica server IP address or name.
    • Port: The default is 5433.
    • Database (required): The Vertica database name. In this example, the database name is VMart.
    • User name (required): The database user.
    • Password (required): The database password.
    • Data source parameters:

      This example shows the default import settings, with importer.schemaPattern set to query the VMart public, store, and online_sales schemas.

      importer.schemaPattern="store,public,online_sales",
      importer.useFullSchemaName=FALSE,
      importer.tableTypes="TABLE,VIEW",importer.importIndexes=TRUE

      The default import settings search for and collect all available metadata about the data source. This metadata is stored in the SYSTEM and SYS schemas in the DataVirtuality Server data engine (an H2 or Postgres database). You can add other import settings such as importer.importKeys, which makes the keys from data source tables visible in the table SYS.Keys (see the metadata query sketch after these steps).

      Similarly, you can set UseCommentsInSourceQuery=true to label queries with session and request IDs. The label would look like this:

      /*teiid sessionid:Wj37rmp5TLX8, requestid:Wj37rmp5TLX8.15.8*/

      However, importing metadata is an expensive process. You may want to limit the import settings to two: importer.schemaPattern and importer.tableTypes. In the case of Vertica, the setting importer.importIndexes is silently ignored.

    • Translator Parameters: Empty by default.

      A translator is an interface between DataVirtuality Server and the data source. A translator imports metadata and determines which SQL constructs are supported for pushdown and how data is retrieved.

      In most cases, you do not need to adjust the translator parameters except when your queries use international character sets. To support multibyte characters, increase the value of the varcharReserveAdditionalSpacePercent translator parameter to accommodate the Vertica VARCHAR data type. Most SQL databases, including DataVirtuality Server, calculate the length of VARCHAR in characters. Vertica, however, calculates the length of VARCHAR in bytes. This means that a VARCHAR(X) field in Vertica can sometimes store fewer characters than comparable data types in other systems, especially when international characters are used.

      For information about translator parameters, see the DataVirtuality Documentation.

    • JDBC parameters: Use this field to specify additional JDBC settings such as Native Connection Load Balancing. For a complete list of Vertica JDBC settings, see the Vertica documentation.

  6. Click Next.
  7. Check Gather Statistics to collect statistics now.

    These statistics help the query engine make better recommendations for optimizing queries. Collected statistics include:

    • Table statistics: total number of records in the table

    • Column statistics: number of distinct values, number of null values, and the min and max values in a column.

  8. Click Finish to add the data source.
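
After the data source is added, you can inspect the imported metadata from the SQL editor. The following is a minimal sketch that relies on the SYS schema described above; the exact set of system tables and their columns may vary between DataVirtuality versions, so check the DataVirtuality Documentation for the catalog layout:

  -- Tables and views imported from the configured data sources
  SELECT * FROM SYS.Tables;
  -- Keys are visible here only if you added importer.importKeys
  SELECT * FROM SYS.Keys;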

Run Live Queries Against Multiple Data Sources

After you add data sources, you can start writing SQL queries in the SQL Query Editor. To open a new SQL editor window, choose Open SQL editor from the Window menu or click the Open SQL editor icon. In this editor window you can issue queries against one or many tables from one or many connected data sources. To distinguish between different data sources, you must fully qualify the table names with the appropriate schema name (data source alias) and a dot.

In this example, VerticaVMartSource is the data source name, public is the schema in Vertica, and inventory_fact is the table in Vertica:

VerticaVMartSource.public.inventory_fact

Column names can be referenced in the following form:

VerticaVMartSource.public.inventory_fact.product_key
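
For example, a minimal live query against this Vertica source uses the fully qualified table name shown above:

  SELECT COUNT(*) FROM VerticaVMartSource.public.inventory_fact;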

You can write simple or complex queries that are directed to different data sources. The following example is a SQL query that returns data from Vertica, Oracle, and SQL Server:

--Inventory fact table - analysis

SELECT
"VMartVertica.date_dimension".date,
"VMartVertica.date_dimension".full_date_description,
"VMartVertica.date_dimension".day_of_week,
"VMartVertica.date_dimension".calendar_month_name,
"VMartVertica.date_dimension".calendar_month_number_in_year,
"VMartVertica.date_dimension".calendar_year_month,
"VMartVertica.date_dimension".calendar_quarter,
"VMartVertica.date_dimension".calendar_year_quarter,
"VMartVertica.date_dimension".calendar_year,
"VMartSQLServer.product_dimension".product_key || "VMartSQLServer.product_dimension".product_version as product_version_key,
"VMartSQLServer.product_dimension".product_description,
"VMartSQLServer.product_dimension".sku_number,
"VMartSQLServer.product_dimension".category_description,
"VMartSQLServer.product_dimension".department_description,
"VMartSQLServer.product_dimension".package_type_description,
"VMartSQLServer.product_dimension".package_size,
"VMartSQLServer.product_dimension".fat_content,
"VMartSQLServer.product_dimension".diet_type,
"VMartVertica.warehouse_dimension".warehouse_name,
"VMartVertica.warehouse_dimension".warehouse_city,
"VMartVertica.warehouse_dimension".warehouse_state,
"VMartVertica.warehouse_dimension".warehouse_region,
"VMartOracle.inventory_fact".qty_in_stock
FROM
"VMartOracle.inventory_fact"
INNER JOIN "VMartVertica.date_dimension"
ON "VMartOracle.inventory_fact".date_key = "VMartVertica.date_dimension".date_key
INNER JOIN "VMartVertica.warehouse_dimension"
ON "VMartOracle.inventory_fact".warehouse_key = "VMartVertica.warehouse_dimension".warehouse_key
INNER JOIN "VMartSQLServer.product_dimension"
ON "VMartOracle.inventory_fact".product_key = "VMartSQLServer.product_dimension".product_key AND
"VMartOracle.inventory_fact".product_version = "VMartSQLServer.product_dimension".product_version
WHERE
"VMartVertica.date_dimension".date >= '2003-01-01' AND "VMartVertica.date_dimension".date <= '2017-12-31' AND
"VMartSQLServer.product_dimension".discontinued_flag = 0;

DataVirtuality executes separate queries at each source to get the necessary data. It applies the filters from the original query at each source, which minimizes the data transferred to the DataVirtuality Server data engine. The result sets from each source are transferred to DataVirtuality Server, where the data engine performs the joins and the final calculations. DataVirtuality Server then passes the result to the client.

All queries connect live to the original sources unless optimizations have been accepted, in which case DataVirtuality Server uses the data stored in the analytical storage to process the queries.

Configure Virtual Views

DataVirtuality Studio allows you to save complex, frequently used queries as views, so users do not have to retype the query each time they want to see the data. You can also use views to expose only the relevant data instead of everything the original source offers.

To save your query as a view, use the CREATE VIEW command.

=> CREATE VIEW <virtual schema name>.<view name> AS SELECT...

This is an example of a virtual view based on VMart tables from different data sources:

--Average inventory over time by product department

CREATE VIEW views.avg_inventory_by_product AS
SELECT
"VMartVertica.date_dimension".calendar_year AS "Year",
"VMartVertica.date_dimension".calendar_month_number_in_year AS "Month number",
"VMartVertica.date_dimension".Calendar_month_name AS "Month name",
"VMartSQLServer.product_dimension".category_description AS "Product category",
AVG("VMartVertica.inventory_fact".qty_in_stock) AS "Average quantity in stock"
FROM
"VMartVertica.inventory_fact"
INNER JOIN "VMartVertica.date_dimension"
ON "VMartVertica.inventory_fact".date_key = "VMartVertica.date_dimension".date_key
INNER JOIN "VMartOracle.warehouse_dimension"
ON "VMartVertica.inventory_fact".warehouse_key = "VMartOracle.warehouse_dimension".warehouse_key
INNER JOIN "VMartSQLServer.product_dimension"
ON "VMartVertica.inventory_fact".product_key = "VMartSQLServer.product_dimension".product_key AND
"VMartVertica.inventory_fact".product_version = "VMartSQLServer.product_dimension".product_version
WHERE
"VMartVertica.date_dimension".date >= '2003-01-01' AND "VMartVertica.date_dimension".date <= '2017-12-31' AND
"VMartSQLServer.product_dimension".discontinued_flag = 0
GROUP BY
1, 2, 3, 4
ORDER BY
1, 2;

DataVirtuality Server provides a default virtual schema named views to hold all virtual views, but you can create new virtual schemas if necessary. To see all virtual views, click the appropriate virtual schema inside the Virtual Schemas node in the Data Explorer.
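
After the view is created, you can query it from the SQL editor or from a connected client tool like any other table. For example:

  SELECT * FROM views.avg_inventory_by_product;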

Connect Client Tools to DataVirtuality Server

After you add data sources and create virtual views, you can connect a front-end application to the DataVirtuality server using the JDBC or ODBC drivers supplied by DataVirtuality. For example, you can connect Microsoft Excel to the DataVirtuality server using the DataVirtuality ODBC driver to query and report data from multiple data sources. Because Excel is typically installed as a 32-bit application, you need to install the DataVirtuality 32-bit ODBC driver and create a 32-bit DSN.

Download DataVirtuality Drivers

You can download the DataVirtuality drivers in two ways:

  • Use DataVirtuality Studio to install the drivers on the client machine where the front-end application is running
  • Download the driver that matches your client operating system from http://my_DV_Server:8080.

ODBC Installation and Configuration

To install the DataVirtuality ODBC driver, run the installer and follow the instructions. After you install the ODBC driver, create a system DSN using the Windows ODBC Administrator tool.

To configure the 64-bit ODBC driver, use the 64-bit version of the ODBC Administrator tool:

%SystemRoot%\system32\odbcad32.exe

To configure the 32-bit ODBC driver, use the 32-bit version of the ODBC Administrator tool:

%SystemRoot%\SysWOW64\odbcad32.exe

This example shows how to use the DataVirtuality 64-bit ODBC driver to create a 64-bit DSN.

Parameters:

  • Data Source: An arbitrary DSN name.
  • Database: Must be datavirtuality.
  • Server: The IP address of the DataVirtuality server.
  • SSL Mode: disable.
  • Port: 35432.
  • User: The DataVirtuality Server user name.
  • Password: The DataVirtuality Server password.

Test the connection and then save the DSN. You can now use the DSN in a front-end tool to connect to the DataVirtuality server.

JDBC Installation and Configuration

Install the JDBC driver by placing the datavirtuality-jdbc.jar file in the external libraries folder of the front-end application. Then use the following JDBC URL in the front-end application, supplying the IP address or name of the machine where DataVirtuality Server is running:

 jdbc:datavirtuality:datavirtuality@mm://<my_DV_Server>:31000;SHOWPLAN=ON

Accept Optimizations

Optimizations are query results that DataVirtuality stores in tables in the analytical storage. DataVirtuality recommends optimizations for queries based on how often the queries are issued.

When you accept optimizations, the query results in the source are transferred to the analytical storage where they are available for fast access. DataVirtuality Server automatically detects data that is fully or partially available in the analytical storage and knows when to use stored data instead of querying the original source.

Each optimization has a color-coded priority. Green means that the query is rarely issued, yellow means that the query is issued quite often, and red means that the query is issued very often. Accept the red-colored optimizations first.

To accept an optimization, go to the Optimization View in DataVirtuality Studio, right-click the optimization, and choose Enable and Run Optimizations. This action creates a table named mat_table_xx_xx in the analytical storage and redirects all subsequent queries to this table. To stop redirecting a query to the analytical storage, revert the accepted optimization. You can re-enable it later by clicking Accept optimization. After re-enabling an optimization, be sure to run or schedule it so that the analytical storage is kept up to date.
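
Because the materialized tables are created in the analytical storage (and, as noted earlier, count towards your Vertica license when Vertica hosts the analytical storage), you may want to check them on the Vertica side. The following is a minimal sketch, assuming the dwh schema from the earlier example and the mat_table naming pattern described above:

  -- Run in Vertica: list the materialized optimization tables
  SELECT table_schema, table_name
  FROM v_catalog.tables
  WHERE table_schema = 'dwh'
    AND table_name ILIKE 'mat_table%';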

Updating Data in the Analytical Storage

DataVirtuality does not automatically refresh the query results in the analytical storage. To ensure that the data in the analytical storage is kept up to date, you must rerun the optimizations. You can schedule optimizations to rerun periodically, based on how often the data changes at the source.

To refresh an optimization, click its name and select Run Optimization. To schedule periodic refreshes, select Add Schedule.

To specify the type of refresh, you must create a replication job. There are several types of replication jobs, including:

  • Complete replication (full load), which replaces the copy of the data in the storage with every refresh.
  • Up to history update (incremental load), which lets you keep track of new data and changes over time. This replication technique is also known as slowly-changing dimensions.

Running Reports on Optimized Data

After you have accepted optimizations and scheduled refreshes, the data needed to populate reports is available in the analytical storage. DataVirtuality Server automatically recognizes that the data is fully or partially stored in the analytical storage and returns it from the tables in the analytical storage instead of the original sources.

DataVirtuality SQL Syntax

DataVirtuality is a data virtualization product, so it does not use the Vertica SQL dialect directly. Instead, it uses a generic SQL dialect that is independent of the underlying database. Most common SQL constructs are available, but some of them work differently. For example, the to_hex function in Vertica is expressed as to_chars(value, 'HEX') in DataVirtuality:

--Original query using Vertica functions and syntax:

SELECT
  ValueDesc,
  to_hex(binary_column) AS binary_column,
  LENGTH(to_hex(binary_column)),
  SUBSTR(to_hex(binary8k_column), 0, 15) AS binary8k_column,
  LENGTH(to_hex(binary8k_column))
  FROM VERT_DATATYPE_v1_0_5_SCHEMA.Binary_Table;

If you run this query through DataVirtuality, it fails with an error because to_hex is not a DataVirtuality function.

You must rewrite the query and express the to_hex function as to_chars using DataVirtuality syntax:

SELECT
  ValueDesc,
  TO_CHARS(binary_column, 'HEX') as binary_column,
  LENGTH(TO_CHARS(binary_Column, 'HEX')) as length_binary_column,
  SUBSTRING(TO_CHARS(binary8k_Column, 'HEX'), 0, 15) as binary8k_column,
  LENGTH(TO_CHARS(binary8k_Column, 'HEX')) AS length_binary8k_column
  FROM "VerticaDatatypesTest.Binary_Table";

See the DataVirtuality Documentation to find the syntax you should use to submit your query.

The following are examples of DataVirtuality SQL syntax for data type conversion, with a combined sketch after the list:

  • To define a date literal explicitly, use the escape syntax {d '2003-01-01'}.
  • To convert a date to a string, use cast(datefield as string) or use the formatdate function, which is described in the reference guide and accepts different patterns. For example: formatdate(date, 'YYYY-DD-MM')
  • DataVirtuality automatically converts string literals to the implied type. For example: SELECT * FROM my_table WHERE created_by = '2003-01-02'
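
The following sketch combines these conversions in a single query. The table and column names (my_table, created_date) are hypothetical, and the format pattern simply repeats the example above:

  -- Illustrative only: my_table and created_date are hypothetical names
  SELECT
    cast(created_date as string)           AS created_date_string,
    formatdate(created_date, 'YYYY-DD-MM') AS created_date_formatted
  FROM my_table
  WHERE created_date >= {d '2003-01-01'};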

DataVirtuality Documentation

DataVirtuality documentation is installed with DataVirtuality Server. To access the documentation, use the following URL, where my_dv_server is the IP address or host name of the computer where DataVirtuality Server is running:

http://my_dv_server:8080/

To access the documentation, supply the user name and password that you specified in the connection information for DataVirtuality Server.

Troubleshooting

The DataVirtuality log files, located in the DataVirtuality Server directory, are useful for troubleshooting.

The following log file (on Windows) contains information about queries, errors, and other actions performed by the DataVirtuality server:

%pathToDVserver%\standalone\log\boot.log 

For example:

C:\Program Files (x86)\datavirtuality\dvserver\standalone\log\boot.log

DataVirtuality Support for Vertica Data Types

DataVirtuality has the following limitations in its support for Vertica data types:

  • DataVirtuality does not support the Vertica CHAR data type. Only the first character of a string of type CHAR is displayed. To display the full value of strings of type CHAR, cast the column to VARCHAR. For example:
    => SELECT CAST(my_char_column AS VARCHAR) AS CharToVarchar FROM my_char_table;
  • DataVirtuality supports a maximum length of 32768 characters for the Vertica LONG VARCHAR data type. Longer strings are truncated to 32768 characters.
  • Time zone offset is not displayed for TimeTz and TimestampTz values.
  • Milliseconds are not displayed for values of data type TIME.
  • Milliseconds are rounded off to the nearest second for data type TimeTz.
  • The Vertica interval data types are not supported. These include Interval Hour to Second, Interval Hour to Minute, Interval Day to Second, and Interval Year to Month. Queries that include columns with an interval data type are not executed. DataVirtuality uses the TIMESTAMPADD and TIMESTAMPDIFF functions instead of interval data types, as sketched below.
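
For example, instead of returning an interval between two timestamps, you can compute the difference with TIMESTAMPDIFF. This is a minimal sketch; the table and column names are hypothetical, and SQL_TSI_DAY is one of the standard interval keywords (see the DataVirtuality Documentation for the full list):

  -- Days between two timestamp columns, without using an interval data type
  -- (my_events, start_ts, and end_ts are hypothetical names)
  SELECT TIMESTAMPDIFF(SQL_TSI_DAY, start_ts, end_ts) AS days_between
  FROM my_events;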

For information about DataVirtuality data types, see the DataVirtuality Documentation.

For More Information

  • DataVirtuality: http://www.datavirtuality.com/en
  • Vertica Community: https://my.vertica.com/community/
  • Vertica Documentation: http://my.vertica.com/docs/latest/HTML/index.htm
  • Big Data and Analytics Community: https://my.vertica.com/big-data-analytics-community-content/
