Release Notes

Vertica
Software Version: 9.1.x

 

Updated: June 20, 2018

About Vertica Release Notes

What's New in Vertica 9.1

What's Deprecated in Vertica 9.1

Vertica 9.1.0: Resolved Issues

Vertica 9.1.0: Known Issues

About Vertica Release Notes

The Release Notes contain the latest information on new features, changes, fixes, and known issues in Vertica 9.1.x.

They also contain information about issues resolved in the Vertica 9.1.0-1, 9.1.0-2, and 9.1.0-3 hotfixes.

Downloading Major and Minor Releases, and Service Packs

The Premium Edition of Vertica is available for download at my.vertica.com.

The Community Edition of Vertica is available for download at the following sites:

The documentation is available at http://my.vertica.com/docs/9.1.x/HTML/index.htm.

Downloading Hotfixes

Hotfixes are available to Premium Edition customers only. Each software package on the my.vertica.com/downloads site is labeled with its latest hotfix version.

What's New in Vertica 9.1.0

Take a look at the Vertica 9.1 New Features Guide for a complete list of additions and changes introduced in this release.

Licensing

AWS Licensing Model

As of Vertica 9.1, you can use a Vertica by-the-hour license, a pay-as-you-go model in which you pay only for the number of nodes and the number of hours you use. These Paid Listings are available in the AWS Marketplace:

An advantage of using the Paid Listing is that all charges appear on your Amazon AWS bill rather than requiring a separately purchased Vertica license. This eliminates the need to estimate your storage needs in advance.

See more: Vertica with CloudFormation Templates

Automatic License Auditing Now Includes ORC and Parquet Data

Vertica 9.1.0 now automatically audits ORC and Parquet data stored in external tables.

Vertica licenses can include a raw data allowance. Since 2016, Vertica licenses have allowed you to use ORC and Parquet data in external tables. This data has always counted against any raw data allowance in your license. Previously, the audit of data in ORC and Parquet format was handled manually. Because this audit was not automated, the total amount of data in your native tables and in external tables could exceed your licensed allowance for some time before being spotted.

Starting in version 9.1.0, Vertica automatically audits ORC and Parquet data in external tables. This auditing begins soon after you install or upgrade to version 9.1.0. If your Vertica license includes a raw data allowance and you have data in external tables based on Parquet or ORC files, review your license compliance before upgrading to Vertica 9.1.x. Verifying that your database is compliant with your license terms avoids having your database become non-compliant soon after you upgrade.
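
For example, you can estimate your database's licensed data size and check compliance before upgrading. A minimal sketch using Vertica's standard license functions (passing an empty string to AUDIT estimates the entire database):

=> SELECT AUDIT('');
=> SELECT GET_COMPLIANCE_STATUS();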

See more: Verify License Compliance for ORC and Parquet Data

Upgrade and Installation

Fixing Unsafe Buddy Projections

As of Vertica 9.1.0, the SELECT and ORDER BY clauses of all projection buddies must specify columns in the same order. Before upgrading to Vertica 9.1 or higher, you are strongly urged to verify that all projection buddies in your current database comply with these new requirements.
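
As a sketch of a compliant design (the table and column names here are hypothetical), a projection created with KSAFE 1 generates buddies whose SELECT and ORDER BY clauses share the same column order:

=> CREATE PROJECTION sales_p (sale_id, store_id, amount)
   AS SELECT sale_id, store_id, amount FROM sales
   ORDER BY sale_id, store_id
   SEGMENTED BY HASH(sale_id) ALL NODES KSAFE 1;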

See more: Fixing Unsafe Buddy Projections

AWS Installation Methods

As of release 9.1 you can install Vertica on Amazon Web Services (AWS) using the following products in the AWS Marketplace:

Each of these products has three CloudFormation Templates (CFTs) and one AMI, as follows:

CFTs

AMI

See more: Installing Vertica with CloudFormation Templates

Eon Mode

Eon Mode, a database mode that was previously in beta, is now generally available.

You can now choose to operate your database in Enterprise Mode (the traditional Vertica architecture, where data is distributed across your local nodes) or in Eon Mode, an architecture in which the database's storage layer resides in a single communal location, separate from the compute nodes. You can rapidly scale an Eon Mode database to meet variable workload needs, especially to increase the throughput of many concurrent queries.

After you create a database, its functionality is largely the same regardless of the mode. The differences between the two modes lie in their architecture, deployment, and scalability.

See more: Using Eon Mode

Loading Data

Better Support for S3 Session Parameters

When you read from S3 using COPY FROM with S3 URLs, Vertica uses the configuration parameters described in AWS Parameters. Previously, these parameters could be set only globally, which made it harder to read from different regions or with different credentials in parallel. You can now set these parameters at the session level using ALTER SESSION.

In addition, if you use ALTER SESSION to set an AWS parameter, Vertica automatically sets the corresponding UDParameter used by the UDSource described in Bulk Loading and Exporting Data From Amazon S3.
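
For example, a session might scope its region and credentials like this (AWSRegion and AWSAuth are among the documented AWS parameters; the bucket, credentials, and table are placeholders):

=> ALTER SESSION SET AWSRegion = 'us-west-2';
=> ALTER SESSION SET AWSAuth = 'accesskeyid:secretaccesskey';
=> COPY sales FROM 's3://examplebucket/sales/*' DELIMITER '|';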

See more: Specifying COPY FROM Options

Management Console

Provision and Revive an Eon Mode Database

Management Console now provides the ability to revive an Eon Mode database. Eon Mode databases keep an up-to-date version of their data and metadata in their communal storage locations. After the database is shut down, you can restore it later in the same state in a newly provisioned cluster.

The Provision and Revive wizard is provided through a deployment of Vertica and Management Console available on the AWS Marketplace.

See more: Reviving an Eon Mode Database in MC

Monitor External Data

Previously, Management Console only provided monitoring information for internal Vertica tables. In Vertica 9.1.0, MC detects and monitors any external tables and HCatalog data included in your database.

To see this external data visualized, take a look at the Table Utilization charts on the MC Activity page. The table utilization charts on this page now reflect external tables and HCatalog data. The table information displayed now includes table types (external, internal, and HCatalog) and table definitions (applicable only to external tables).

You can also see changes in the Storage View page. When MC detects that your database contains external tables or references HCatalog data, it displays an option to view more details about those tables.

See more: Monitoring Table Utilization and Projections and Monitoring Database Storage

Security and Authentication

Audit Categories

Vertica 9.1 introduces audit categories that make it easy to search for queries, parameters, and tables with a similar purpose. There are three types of SQL objects you can audit in Vertica: queries, tables, and parameters. New system tables bring together changes to these SQL objects so that you can track them more easily. Use the security and authentication audit category to better understand changes to your database.

This feature introduces four new system tables to better audit changes to your database (see the example query after this list):

AUDIT_MANAGING_USERS_PRIVILEGES

LOG_PARAMS

LOG_QUERIES

LOG_TABLES
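
For example, to review recently audited queries (a minimal sketch; the available filter columns are described under Database Auditing):

=> SELECT * FROM log_queries LIMIT 10;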

See more: Database Auditing

Data Analysis

Machine Learning for Predictive Analytics

New features include:

See more: Machine Learning for Predictive Analytics

SDK Updates

All C++ and Java UDxs Support Cancellation

All UDx types now support cancellation callbacks. You can implement the CANCEL() function to perform any cleanup specific to your UDx. Previously, only some UDx types supported cancellation.

See more: Handling Cancel Requests

Python UDTFs

The SDK now supports writing user-defined transform functions (UDTFs) in Python, in addition to C++, Java, and R.

See more: Python API and Python SDK Documentation

Apache Hadoop Integration

Delegation Tokens and Proxy Users

An alternative to granting HDFS access to individual Vertica users is to use delegation tokens, either directly or with a proxy user. In this configuration, Vertica accesses HDFS on behalf of some other (Hadoop) user. The Hadoop users need not be Vertica users at all, and Vertica need not be Kerberized.
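
A minimal sketch of a delegation-token setup (assuming the HadoopImpersonationConfig session parameter described in that topic; the nameservice name and token are placeholders):

=> ALTER SESSION SET HadoopImpersonationConfig =
   '[{"nameservice": "hadoopns", "token": "<delegation-token>"}]';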

See more: Proxy Users and Delegation Tokens

Apache Spark Integration

The Spark Connector is now distributed as part of the Vertica server installation. Instead of downloading the connector from the myVertica portal, you can now get the Spark Connector file from a directory on a Vertica node.

The Spark Connector JAR file is now compatible with multiple versions of Spark. For example, the Connector for Spark 2.1 is also compatible with Spark 2.2.

See more: Getting the Spark Connector and Vertica Integration for Apache Spark in Supported Platforms

Voltage SecureData Integration

Vertica 9.1.0 now integrates with Voltage SecureData encryption. This feature lets you:
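
For example, encrypting a field on the fly might look like this (a sketch assuming the VoltageSecureProtect function from the Voltage SecureData topic; the value and format are placeholders):

=> SELECT VoltageSecureProtect('123-45-6789' USING PARAMETERS format='ssn');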

See more: Voltage SecureData

Apache Kafka Integration

Changes to the Kafka Parser Functions

Vertica 9.1 introduces the following new features to the KafkaAvroParser and KafkaJSONParser:

Kafka Integration and Eon Mode

The Vertica integration with Apache Kafka now works in Eon Mode. There are several details to consider when streaming data from Kafka into an Eon Mode Vertica cluster. See Vertica Eon Mode and Kafka for details.

See more: Integrating with Apache Kafka

What's Deprecated in Vertica 9.1

The following Vertica functionality was deprecated:

For more information see Deprecated and Retired Functionality in the Vertica documentation.

Vertica 9.1.0-3: Resolved Issues

Release Date: 06/20/2018

This hotfix addresses the issues below.

Issue

Component

Description

VER-62510 AP-Advanced, Sessions

Using an ML function that accepts a model as a parameter in the definition of a view, and then feeding that view into another ML function as its input relation, caused failures in some special cases.

This issue has been fixed.

VER-62836 S3

There was an issue loading from the default region (us-east-1) in some cases where the communal location was in a different region.

This issue has been fixed.

VER-62852 Optimizer - GrpBy & Pre Pushdown

Including certain functions like nvl in a grouping function argument resulted in a "Grouping function arguments need to be group by expressions" error.

This issue has been fixed.

VER-62419 Optimizer

Attempts to swap partitions between flattened tables that were created in different versions of Vertica failed, due to minor differences in how source and target SET USING columns were internally defined.

This issue has been fixed.

VER-62076 Hadoop

Hadoop impersonation status messages were not being properly logged.

This issue has been fixed to allow informative messages at the default logging level.

Vertica 9.1.0-2: Resolved Issues

Release Date: 05/30/2018

This hotfix addresses the issues below.

Issue

Component

Description

VER-62096 Data Export

Export to Parquet using local disk non-deterministically failed in some situations with a 'No such file or directory' error message.

This issue has been fixed.

VER-62465 Execution Engine, Optimizer

Null-value detection for float data type columns returned inconsistent results for some queries with median and percentile functions.

This issue has been fixed.

VER-62463 Vertica Log Text Search

Dependent index tables did not have their reference indices updated when the base table's columns were altered but not removed. This caused an internal error on reference.

This issue has been fixed so the indices are updated when you alter the base table's columns.

VER-62144 Error Handling, Execution Engine

If you tried to insert data that was too wide for a VARCHAR or VARBINARY column, the error message did not specify the column name.

This error message now includes the column name.

Vertica 9.1.0-1: Resolved Issues

Release Date: 05/10/2018

This hotfix addresses the issues below.

Issue

Component

Description

VER-62043 UI - Management Console

When creating an Eon Mode database through Management Console on an AWS cluster, entering a valid communal storage location and selecting IAM Role authentication did not enable the Next button.

This issue occurred only during database creation, not when creating an Eon Mode database cluster.

This issue has been fixed.

VER-62063 UI - Management Console

In the Database and Clusters > VerticaDB activity > Detail screen of an Eon database, some column text was not properly formatted.

This issue has been fixed.

VER-62045 Optimizer, Sessions

An internal EE error occurred when a query was retried several times while sequence objects referenced in the query were concurrently dropped and re-created.

This issue has been fixed so the retried query now picks up the re-created sequence.

VER-62148 Cloud - Amazon, UI - Management Console

When entering the Communal Storage URL for an Eon Mode database in the Management Console, some invalid forms of the URL were allowed.

This issue has been fixed.

VER-62118 UI - Management Console

In the Vertica Management Console (MC) in AWS, with some Vertica BYOL licenses, when using Cluster Management to add an instance to a Vertica database cluster, you were prompted to upload a license file even if your Premium Edition license was already installed. In the Add Instance wizard, you saw the message 'Your database exceeds the free Community Edition limits, please upload a valid Premium Edition license'.

This issue has been fixed.

Vertica 9.1.0: Resolved Issues

Release Date: 04/30/2018

To see a complete list of additions and changes introduced in this release, refer to the Vertica 9.1 New Features Guide.

Issue

Component

Description

VER-59892 Hadoop, SAL, Sessions Previously, Vertica accumulated sockets stuck in a CLOSE_WAIT state on non-Kerberized WebHDFS connections, meaning the socket was closed on the HDFS side but Vertica had not called close(). This issue has been fixed.
VER-15347 Data load / COPY Previously, COPY created files for saving rejected data as soon as the COPY started. Now COPY creates rejected-data files only after a row has been rejected.
VER-60277 Optimizer Grouping on analytic-function arguments that are complex expressions sometimes resulted in an internal Optimizer error that crashed the database. This issue has been fixed.
VER-58604 Execution Engine, FlexTable An INSERT query caused multiple nodes to crash with a segmentation fault. This issue has been fixed.
VER-53704 Cloud - Amazon, UDX AWS UDx cancellation of long-running operations was not working properly. Cancellation is improved in this release, and S3 export transactions are now eventually canceled. The S3 source still has some cancellation issues; however, the S3 source is being deprecated in favor of S3FS, which has none of those issues and is much faster.
VER-60368 AP-Advanced When upgrading a cluster from 8.1.1-4 to 9.0.1-2, no errors were reported, but the database would not start after the upgrade: the CLUSTER upgrade changes task kept looping and rolling back with an 'Invalid model name' error. This issue has been fixed.
VER-60454 S3 Before this release, some AWS UDx functions, including S3EXPORT, did not have explicit execution permission and in some cases were not callable due to permission errors. This issue has been fixed.
VER-59055 Execution Engine If a query contained multiple PERCENTILE_CONT(), PERCENTILE_DISC(), or MEDIAN() functions with similar PARTITION BY clauses, and the query also had a LIMIT clause, execution occasionally failed due to a bug during cleanup. This problem has been resolved.
VER-56645 Basics The INSTR() function sometimes missed valid matches when the position parameter was set to a negative value. This problem was resolved.
VER-55542 Execution Engine Queries that specified both LIMIT and UNION ALL clauses failed to complete execution. This issue has been fixed.
VER-44795 Hadoop Sometimes, when a DataNode is overloaded or running out of memory, it sends incomplete HTTP messages over WebHDFS, in which the Content-Length field does not correspond to the actual length of the payload. This caused a CURL error. Vertica now tries to recover by requesting the data again with a longer timeout. If it does not succeed after about 3 minutes, Vertica terminates with a message like: "Error Details: transfer closed with 109314115 bytes remaining to read".
VER-59857 Optimizer In some cases, upgrading Vertica introduced inconsistencies in the catalog that caused fatal errors when it tried to update non-existent objects. Vertica now verifies that statistics objects exist before invalidating statistics for a given column.
VER-59567 Catalog Engine Previously, the TABLE_CONSTRAINTS system table incorrectly reflected a cached value for the constraint table name. There was no internal corruption. The code has been updated so that the table name reflects the correct value rather than the cached one.
VER-58529 Optimizer In certain queries with outer joins over simple subqueries, the Optimizer chose a sub-optimal outer table projection. This led to inefficient resegmentation or broadcast of the join data. This problem has been resolved.
VER-59123 Execution Engine Queries with window functions intermittently produced wrong results. This issue has been fixed.
VER-57129 Hadoop After a user connected to HDFS using the Vertica realm, users from other realms could not connect to HDFS. This behavior has been corrected.
VER-53488 Catalog Engine Vertica did not previously release the catalog memory for objects that were no longer visible by any current or subsequent transactions. The Vertica garbage collector algorithm has been improved.
VER-61021 DDL When using ALTER NODE to change the MaxClientSessions parameter, the node's state changed from Standby or Ephemeral to Permanent. This issue has been fixed.
VER-59791 Client Drivers - JDBC For lengthy string values, the hash value computed by the JDBC implementation differed from the HASH() function on the Vertica server. This issue has been fixed.
VER-57757 Kafka Integration When using the start_point parameter, the KafkaJSONParser sometimes failed to parse nested JSON data, which led to all rows after the first being rejected. This issue has been fixed.
VER-60123 Client Drivers - ODBC The Vertica ODBC driver supports up to 133-digit precision for Numeric types bound to a decimal type. Previously, the Vertica ODBC driver threw a data conversion exception when the precision was over the 133-digit limit. Now, the ODBC driver truncates Numeric values with precision over 133 digits.
VER-53943 Client Drivers - ADO An error handling issue sometimes caused the ADO.NET driver to hang when a connection to the server was lost. This problem has been corrected.
VER-36453 Client Drivers - ODBC The COPY LOCAL statement can appear only once in a compound query. It must be the first statement in a compound query.
VER-60535 Optimizer Running a MERGE statement resulted in the error "DDL statement interfered with this statement". This issue has been fixed.
VER-60887 Backup/DR

Backups to S3 failed when both of the following occurred:

  • You backed up to the root of the S3 bucket.
  • The backup location reached the restorePointLimit.

This issue has been fixed.

VER-59314 Backup/DR Previously, Python script failures on the remote host triggered during a vbr task could result in the error message "No JSON object could be decoded." Vertica now displays a more meaningful error message.
VER-58149 Backup/DR During a restore, vbr did not verify that snapshot metadata had been successfully copied to the initiator node before using those files, which produced an unhelpful error message. Vertica now gives a more meaningful error when this occurs.
VER-58068 Scrutinize Sometimes scrutinize times out during diagnostic collection, leading to diagnostic output from a single host instead of from the cluster. The timeouts for scrutinize have been increased.
VER-60510 Kafka Integration Previously, only pseudosuperuser/dbadmin users could create Kafka schedulers, non-privileged users needed operation privileges granted to them in order to use a scheduler, and the scheduler tables always belonged to pseudosuperuser/dbadmin users. Now, non-privileged Vertica users can run the vkconfig utilities to create and operate a scheduler directly, and the scheduler's tables automatically belong to the user who created the scheduler.
VER-59994 Optimizer

The default value of MaxParsedQuerySizeMB has changed. Previously, the default was 512MB, which bounded only part of the memory used during parsing. The default is now 1024MB, which bounds all parse memory. Some queries that used to run successfully may now encounter the "Request size too big. Please try to simplify the query" error. This is not a regression.

To successfully run the query, increase the value of MaxParsedQuerySizeMB and reset the session.
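
A minimal sketch of that remediation, using the standard SET_CONFIG_PARAMETER function (2048 is an illustrative value; reconnect afterward to reset the session):

=> SELECT SET_CONFIG_PARAMETER('MaxParsedQuerySizeMB', 2048);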

VER-60042 Optimizer When running a query with one or more nodes down, in some cases an inconsistency in plan pruning with buddy plan resulted in an Internal Optimizer error. This issue has been fixed.
VER-51143 Backup/DR Previously, vbr failed on full restore tasks and copy cluster tasks when there was only one database on the cluster and no dbName parameter was specified in the vbr configuration file. This issue has been resolved.
VER-60665 Backup/DR Object restore/replication used to crash a node when restoring/replicating from a backup/snapshot that contains sequences with the default minimum value. This issue is resolved and such sequences are now restored gracefully with the correct minimum value.

Vertica 9.1.0: Known Issues

Updated: April 30, 2018

Vertica makes every attempt to provide you with an up-to-date list of significant known issues in each release. We will update this list as we resolve issues and as we learn of new issues.

Known Issues

Issue

Component

Description

VER-61069 Execution Engine

In very rare circumstances, if a Vertica process crashes during shutdown, the remaining processes might hang indefinitely.

Workaround: The remaining processes can be killed using admintools.

VER-59235 UI - Management Console MC LDAP user authentication does not support changing the default search path. By design, the default search path stays unchanged and serves as the base LDAP search path. To retrieve user information from the LDAP server, change only the user search attribute.
VER-60797 License

AutoPass format licenses do not work properly when installed in Vertica 8.1 or older. To replace them with a legacy Vertica license, users need to set AllowVerticaLicenseOverWriteHP=1.
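
A minimal sketch of setting that parameter, using the standard SET_CONFIG_PARAMETER function:

=> SELECT SET_CONFIG_PARAMETER('AllowVerticaLicenseOverWriteHP', 1);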

VER-60642 Data Export

Export to Parquet using local disk can non-deterministically fail in some situations, with "No such file or directory". Re-running the same export will likely succeed.

VER-58168 Recovery

A transaction that started before a node began recovery is referred to as a dirty transaction in the context of recovery. A recovering node must wait for or cancel such a transaction in order to recover tables modified by it. In rare instances, such transactions hang and cannot be canceled. Tables locked by such a transaction cannot be recovered, and therefore the recovering node cannot transition to 'UP'. Usually you can stop the hung transaction by restarting the node on which it was initiated, assuming that node is not a critical node. In extreme cases, or when the initiator node is a critical node, the cluster must be restarted.

VER-56679 Nimbus, SAL When generating an annotated query, the optimizer does not recognize that the ETS flag is ON and produces the annotated query as if ETS is OFF. If the resulting annotated query is then run when ETS is ON, some hints might not be feasible.
VER-48041 Admin Tools

On some systems, occasionally admintools cannot parse the output it sees while running SSH commands on other hosts in the cluster. The issue is typically transient. There is no known work-around.

VER-54924 Admin Tools

On databases with hundreds of storage locations, admintools SSH communication buffers can overflow. The overflow can interfere with database operations like startup and adding nodes. There is no known work-around.

VER-48020 Hadoop Canceling a query that involves loading data from ORC or Parquet files can be slow if there are too many files involved in the load.
VER-41895 Admin Tools

On some systems admintools cannot parse output while running SSH commands on hosts in the cluster. The root cause of this issue is unknown. If the admintools operation needs to run on just one node, there is a workaround: use SSH to connect to the target node and run the admintools command on that node directly.

VER-61433 Hadoop Under heavy concurrency, when querying ORC files in a Kerberized High Availability HDFS environment, it is possible for the Vertica process on a single node to crash.
VER-57126 Data Removal - Delete, Purge, Partitioning

Partition operations that use a range, for example COPY_PARTITIONS_TO_TABLE, must split storage containers that span the range in order to complete. For tables partitioned using a GROUP BY expression, such split plans can require a relatively large amount of memory, especially when the table has a large number of columns. In some cases the partition operation may fail with "ERROR 3587: Insufficient resources to execute plan on pool <poolname>".

Workaround: Increase the memorysize or decrease the plannedconcurrency of <poolname>.

Hint: A best practice is to group partitions such that it is never necessary to split storage containers. Following this guidance greatly improves the performance of most partition operations.
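
A sketch of the workaround (the pool name and values are placeholders):

=> ALTER RESOURCE POOL mypool MEMORYSIZE '8G';
=> ALTER RESOURCE POOL mypool PLANNEDCONCURRENCY 4;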

VER-60409 AP-Advanced, Optimizer

The APPLY_SVD and APPLY_PCA functions are implemented as User-Defined Transform Functions (UDTFs). A query containing a UDTF that takes many input columns (for example, more than 1000) and also returns many columns may fail with the error "Request size too big" due to additional memory requirements during parsing.

Workaround: Increase configuration parameter MaxParsedQuerySizeMB or reduce the number of output columns. For the cases of APPLY_SVD and APPLY_PCA, you can limit the number of output columns either by setting the parameter num_components or by setting the parameter cutoff. In practice, cutoff=0.9 is usually enough.

Note that if you increase MaxParsedQuerySizeMB to a larger value, for example 4096, each query you run may use 4 GB of memory during parsing. This means that running multiple queries at the same time could cause out-of-memory (OOM) errors if your total memory is limited. Refer to the Vertica documentation for more information about MaxParsedQuerySizeMB.

VER-59147 AP-Advanced

Using a machine learning (ML) function that accepts a model as a parameter in the definition of a view, and then feeding that view into another ML function as its input relation, may cause failures in some special cases.

Workaround: You should always prefix a model_name with its appropriate schema_name when you use it in the definition of a view.

VER-61420 Data Removal - Delete, Purge, Partitioning Partition operations, such as move_partitions_to_table, must split storage containers that have both partitions that match the operations and partitions that do not. Vertica 9.1.0 introduced an inefficiency whereby such a split may separate a storage container into one more storage container than necessary. Any extraneous containers created this way will eventually be merged by the Tuple Mover.
VER-60158 Client Drivers - JDBC Using a single connection in multiple threads could result in a hang if one of the threads does a COMMIT or ROLLBACK without joining the other thread first.
VER-61205 Basics, Catalog Engine

If a configuration parameter value in vertica.conf begins with the # character, Vertica crashes with an unhandled error. For example:

# LDAPLinkBindPswd = #A^F&pGt2J9S#

LDAPLinkFilterGroup = #cn=EDW*

# LDAPLinkFilterUser = cn=*

Workaround: Avoid using parameter values beginning with "#" in the vertica.conf file.

VER-61584 Nimbus, Subscriptions The assertion VAssert(madeNewPrimary) fails. This occurs only while nodes are shutting down or are in an unsafe state.
VER-62000 UI - Management Console

When creating an Eon Mode database through Management Console on an AWS cluster, entering a valid communal storage location and selecting IAM Role authentication does not enable the Next button. This issue occurs during database creation only, not when creating an Eon Mode database cluster.

Workaround: On the same wizard page, select “Use AWS Key Credentials”, enter enough text to enable the Next button, then select IAM Role authentication. Then click the Next button.

VER-61362 Nimbus, Subscriptions

During cluster formation, when one of the up-to-date nodes is missing libraries and attempts to recover them, the recovery fails with a cluster shutdown.

Workaround: Copy libraries into the node's Libraries/ directory from a peer node.

VER-61876 UI - Management Console

In the Vertica Management Console (MC) in AWS, with some Vertica BYOL licenses, when using Cluster Management to add an instance to a Vertica database cluster, you are prompted to upload a license file even if your Premium Edition license is already installed. In the Add Instance wizard, the message 'Your database exceeds the free Community Edition limits, please upload a valid Premium Edition license.' appears.

Workaround: Upload your license file again and the instance and database node are added.

 


Legal Notices

Warranty

The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

The information contained herein is subject to change without notice.

Restricted Rights Legend

Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

Copyright Notice

© Copyright 2006 - 2018 Hewlett-Packard Development Company, L.P.

Trademark Notices

Adobe® is a trademark of Adobe Systems Incorporated.

Microsoft® and Windows® are U.S. registered trademarks of Microsoft Corporation.

UNIX® is a registered trademark of The Open Group.


Send documentation feedback to HPE