Troubleshooting Tips for the Vertica Catalog

This document was co-authored by Shrirang Kamat and Soniya Shah.

The first section of this document describes how a persistent copy of the Vertica catalog is managed and laid out on disk. This information is useful for advanced Vertica users who are troubleshooting catalog disk space utilization issues.

The second section of this document describes how to run and interpret the output of the dump_allocator_usage API. This information is helpful for database administrators who are troubleshooting the reasons behind the growth of the in-memory catalog.

 

The persistent copy of the Vertica catalog

Vertica stores its persistent copy of the catalog on disk as checkpoints and transaction log files in the Catalog directory.

Commit Record

In Vertica, a commit is associated with a set of version tags named global, local, and system. The persistence layer assigns a global commit ID to ensure global ordering. These tags are stored in the commit record, which is used as a terminator in all persistent catalog files.

Transaction Logs

Transaction logs live under the TxnLogs subdirectory. Each log is named txn_<id1>_<id2>.cat, where id1 is the commit ID of the earliest transaction stored in the file and id2 is the commit ID of the latest transaction. Each transaction is stored as a sequence of serialized objects, terminated by a CommitRecord. Subsequent transactions are appended to the file, and the file name is updated so that id2 reflects the latest commit ID. When a transaction log grows beyond the size set by the CatalogCheckpointChunkSizeKB configuration parameter, a new log file is started.
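
For example, you can list the transaction logs and their sizes on a node from the shell. This is a minimal sketch, assuming your current directory is the node's Catalog directory (the actual path depends on your installation):

# Transaction log names follow txn_<id1>_<id2>.cat
ls -lh TxnLogs/txn_*.cat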

Checkpoints

Checkpoints in Vertica live under the Checkpoints subdirectory. Each checkpoint is stored under a subdirectory, c<id>, where id is the commit ID of the latest transaction included in the checkpoint. If the catalog is large, the checkpoint is split into multiple chkpt_<chunk>.cat files. When the checkpoint is complete, a file named completed is created. This file contains the number of chunks in the checkpoint.

When a new checkpoint starts, Vertica stops appending to any existing transaction log, ensuring that transaction logs do not cross checkpoint boundaries. Vertica keeps two complete checkpoints on each node. When a new checkpoint is successfully written, Vertica removes the oldest checkpoint and the transaction logs prior to the commit ID of the oldest retained checkpoint.
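
As a quick check, you can list the Checkpoints subdirectory to confirm that two complete checkpoints exist. This is a minimal sketch, again assuming your current directory is the node's Catalog directory:

ls Checkpoints            # expect two c<id> subdirectories
ls Checkpoints/c<id>      # chkpt_*.cat chunks plus the completed marker file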

When is the persistent copy of the Vertica catalog used?

The persistent copy of the catalog is only read at startup. Upon startup, the most recent checkpoint is read first. If that checkpoint is corrupt or incomplete, the previous checkpoint is read instead. Then, transaction logs with a commit ID higher than the commit ID of the checkpoint that was read are replayed to bring the catalog up to date. A corrupted transaction log causes the replay to stop.

Which configuration parameters manage the persistent catalog?

The following configuration parameters govern how the persistent catalog is managed. Do not alter these parameters unless you are advised to do so by a Vertica support engineer. You can check their current values with the query shown after this list.

  • CatalogCheckpointChunkSizeKB: A checkpoint is split into files of approximately this size. If a transaction log file exceeds this size, a new log file is started. The default value is 100 MB.
  • CatalogCheckpointPercent: When the combined size of the active transaction logs exceeds this percentage, a new checkpoint is signaled. The default value is 50%.
  • CatalogCheckpointMinLogSizeKB: For very small catalogs, this parameter prevents checkpoints until the transaction logs reach the given size. The default value is 10 MB.
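
To check the current and default values of these parameters, you can query the CONFIGURATION_PARAMETERS system table, as in this minimal sketch (column names can differ slightly across Vertica versions):

=> SELECT parameter_name, current_value, default_value
-> FROM configuration_parameters
-> WHERE parameter_name ILIKE 'CatalogCheckpoint%';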

How can we trigger writing a new checkpoint?

Vertica automatically writes a new checkpoint when the conditions described above are met.

If you are an advanced Vertica user and need to free space in the Catalog directory, you can use the following commands to force two new checkpoints.

Important: Seek assistance from Vertica support as needed.

First, get the current value of the CatalogCheckpointPercent configuration parameter:

=> SELECT get_config_parameter('CatalogCheckpointPercent');

Then, lower the CatalogCheckpointPercent value so that a new checkpoint is triggered sooner:

=> SELECT set_config_parameter('CatalogCheckpointPercent', '10');

Next, force the CatalogCheckpointer service to run, then check the Checkpoints directory for a new checkpoint subdirectory and its completed file. Wait until the completed file is written and the older checkpoint is removed:

=> SELECT hurry_service('System', 'CatalogCheckpointer');

Repeat the same step to write the second checkpoint:

=> SELECT hurry_service('System', 'CatalogCheckpointer');

After the second checkpoint is written, revert the configuration parameter to the default value of 50:

=> SELECT set_config_parameter('CatalogCheckpointPercent', '50');

Finally, confirm the value of the configuration parameter is set correctly:

=> SELECT get_config_parameter('CatalogCheckpointPercent');

Can we compress files in the Checkpoints and TxnLogs directories?

If the storage location you use for writing the Vertica catalog runs out of disk space, and a node is down and cannot be restarted, your Vertica administrator can compress the .cat files in the Checkpoints and TxnLogs directories. Note that files in these directories can only be compressed while the node is in the DOWN state.

Follow these steps:

Change to the first checkpoint subdirectory under Checkpoints:

cd Checkpoints/c<id1>
for file in *.cat; do gzip "$file"; done

Change to the second checkpoint subdirectory:

cd ../c<id2>
for file in *.cat; do gzip "$file"; done

Change to the TxnLogs directory:

cd ../../TxnLogs
for file in *.cat; do gzip "$file"; done
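
Optionally, you can verify how much space the catalog files now use. A minimal sketch, starting from the TxnLogs directory used in the previous step:

cd ..
du -sh Checkpoints TxnLogs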

Can we drop the incomplete checkpoint to reclaim space?

If you need to reclaim catalog disk space, you can remove an incomplete checkpoint directory. Vertica never uses an incomplete or corrupted checkpoint. Seek assistance from Vertica support as needed.
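
To identify an incomplete checkpoint, look for a c<id> subdirectory that lacks the completed marker file. A minimal sketch, run from the node's Catalog directory:

ls Checkpoints/*/completed    # any checkpoint subdirectory missing from this list is incomplete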

Troubleshooting Growth in the In-Memory Catalog

This section describes how to use and interpret the output of the dump_allocator_usage API when troubleshooting issues related to the size of the in-memory catalog.

The dump_allocator_usage API

To review the memory usage of the in-memory catalog for a Vertica node, you can connect to that node and run the dump_allocator_usage API. This API dumps real-time usage for two types of allocators: chunk-based and object type-based. Chunk sizes range from 2^3 to 2^27. Objects represent memory usage broken down by the object types used in Vertica. The dump_allocator_usage API is very useful for determining what contributes to catalog growth.
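
A minimal sketch of invoking the API from a vsql session follows. Treat the exact invocation, including any arguments and where the output is written, as an assumption that can vary by Vertica version; confirm with Vertica support if in doubt:

=> SELECT dump_allocator_usage();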

The chunk-based allocator (free list)

This allocator is used for the global pool and the SAL pool in the Vertica catalog. For the global pool, the allocator stores minimum and maximum values and other statistics-related information. For the SAL pool, the allocator stores WOS data structures. For each chunk size, the output of dump_allocator_usage prints the number of chunks in use, the number on the free list, and the memory in bytes. The log looks like the following, where 3351 chunks with a size of 2^6 are in use and 433 are on the free list:

Size 2^6: 433 on free list; 3351 still in use (242176 bytes)
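
In this example, the byte count covers both the in-use chunks and the chunks on the free list: (3351 + 433) × 2^6 = 242176 bytes.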

The object type-based allocator (typed pool allocator)

This allocator is used to store catalog objects and the internal indexes that Vertica creates to access the catalog. For each object type, the output of dump_allocator_usage prints the number of objects in use, the number on the free list, the memory required to store each object, and the total memory allocated (in bytes). Catalog object types start with N3Cat<id>ObjectTypeE. Objects that do not start with N3Cat are internal types, such as indexes.

The total number of bytes is calculated by multiplying the number of objects (in use plus on the free list) by the memory required to store each object. The log looks like the following:

80960 bytes total (2 used / 438 free @ 184) for type N3Cat7MiniRosE

20832 bytes total (0 used / 434 free @ 48) for type St13_rb_tree_nodeI
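
In the first line, for example, the total is (2 + 438) × 184 = 80960 bytes.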

How can we free catalog memory?

Catalog objects in Vertica are often both long-lived and small, so there is a significant risk of memory fragmentation without an allocator that co-locates these objects. The catalog allocator never gives memory back after allocating it; instead, it maintains a free list. Memory on the free list can only be reclaimed by restarting the node or the database.

Configuration parameters that impact the in-memory catalog

The following configuration parameter governs the maximum chunk size that is allocated by Vertica for the tiered catalog pool:

  • MaxTieredPoolScale: Vertica uses malloc instead of talloc if the memory allocation scale exceeds this parameter value. You can check the current setting as shown below.
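
To check the current value, use get_config_parameter, as elsewhere in this document:

=> SELECT get_config_parameter('MaxTieredPoolScale');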

Is memory used by the in-memory catalog tracked by Vertica resource pools?

Vertica 8.0.1 introduced a read-only resource pool named metadata that automatically tracks the memory used by the in-memory catalog. The RESOURCE_POOL_STATUS system table can be used to track the in-memory catalog size on each node.
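
For example, the following query, a minimal sketch with an assumed choice of columns, reports metadata pool usage per node:

=> SELECT node_name, pool_name, memory_size_kb, memory_inuse_kb
-> FROM resource_pool_status
-> WHERE pool_name = 'metadata';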