Phase 8 introduced native columnar storage and a vectorized execution engine to drastically improve the performance of analytical workloads.
Implemented a high-performance column-oriented data store.
- Binary Column Files: Stores data in contiguous binary files on disk, one per column.
- Batch Read/Write: Optimized I/O paths for loading and retrieving large blocks of data efficiently.
- Schema-Defined Layout: Automatically organizes data based on the table's schema definition.
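A minimal sketch of the binary-column layout described above (the file path and function names here are hypothetical illustrations, not the engine's actual API):

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Hypothetical sketch: one contiguous binary file per column.
// Writes a block of INT64 values, then reads a sub-range back
// with a single seek + read, mirroring the batch I/O path.
void WriteColumnFile(const std::string& path, const std::vector<int64_t>& values) {
    std::ofstream out(path, std::ios::binary);
    out.write(reinterpret_cast<const char*>(values.data()),
              static_cast<std::streamsize>(values.size() * sizeof(int64_t)));
}

std::vector<int64_t> ReadColumnBlock(const std::string& path, size_t offset, size_t count) {
    std::vector<int64_t> block(count);
    std::ifstream in(path, std::ios::binary);
    in.seekg(static_cast<std::streamoff>(offset * sizeof(int64_t)));
    in.read(reinterpret_cast<char*>(block.data()),
            static_cast<std::streamsize>(count * sizeof(int64_t)));
    return block;
}
```

Because values of one column sit contiguously on disk, a scan that touches only that column reads no bytes from the other columns.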
Developed SIMD-friendly contiguous memory buffers for batch processing.
- ColumnVector & NumericVector: Specialized C++ templates for storing a "vector" of data for a single column.
- StringVector: Variable-length string storage for TEXT/VARCHAR/CHAR columns.
- VectorBatch: A collection of `ColumnVector` objects representing a chunk of rows (typically 1024 rows).
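The vector abstractions above might look roughly like this (a simplified sketch; the real templates presumably carry more machinery, such as null bitmaps):

```cpp
#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>

// Simplified sketch of the vector hierarchy; names follow the text above,
// but the member layout is an assumption.
constexpr size_t kBatchSize = 1024;  // rows per VectorBatch

class ColumnVector {  // type-erased base for one column's values
 public:
  virtual ~ColumnVector() = default;
  virtual size_t Size() const = 0;
};

template <typename T>
class NumericVector : public ColumnVector {  // contiguous, SIMD-friendly buffer
 public:
  void Append(T v) { data_.push_back(v); }
  T Get(size_t i) const { return data_[i]; }
  size_t Size() const override { return data_.size(); }
  const T* Data() const { return data_.data(); }  // raw pointer for tight loops
 private:
  std::vector<T> data_;
};

struct VectorBatch {  // one ColumnVector per schema column, up to kBatchSize rows
  std::vector<std::unique_ptr<ColumnVector>> columns;
  size_t num_rows = 0;
};
```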
Built a batch-at-a-time physical execution model.
- Vectorized Operators: Implemented `Scan`, `Filter`, `Project`, `Aggregate`, and `GroupBy` operators designed for chunk-based execution.
- Batch-at-a-Time Interface: Operators pass entire `VectorBatch` objects between themselves, minimizing virtual function call overhead.
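A minimal sketch of a pull-based batch-at-a-time interface, assuming a simple `Next()` contract (the method name and the use of plain vectors as stand-in batches are illustrative assumptions):

```cpp
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

// Illustrative sketch: one virtual call per batch rather than per row.
// A plain vector stands in for VectorBatch to keep the example small.
using Batch = std::vector<int64_t>;

class VectorizedOperator {
 public:
  virtual ~VectorizedOperator() = default;
  // Fills `out` with the next batch; returns false when input is exhausted.
  virtual bool Next(Batch* out) = 0;
};

class ScanOperator : public VectorizedOperator {
 public:
  explicit ScanOperator(Batch data) : data_(std::move(data)) {}
  bool Next(Batch* out) override {
    if (done_) return false;
    *out = data_;  // a real scan would emit fixed-size chunks
    done_ = true;
    return true;
  }
 private:
  Batch data_;
  bool done_ = false;
};

class FilterOperator : public VectorizedOperator {
 public:
  FilterOperator(VectorizedOperator* child, std::function<bool(int64_t)> pred)
      : child_(child), pred_(std::move(pred)) {}
  bool Next(Batch* out) override {
    Batch in;
    if (!child_->Next(&in)) return false;
    out->clear();
    for (int64_t v : in)  // one tight loop over the whole batch
      if (pred_(v)) out->push_back(v);
    return true;
  }
 private:
  VectorizedOperator* child_;
  std::function<bool(int64_t)> pred_;
};
```

With this shape, the per-row cost of virtual dispatch is amortized across the whole batch.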
Optimized global analytical queries (COUNT, SUM).
- Vectorized Global Aggregate: Aggregates entire batches of data with minimal branching and high cache locality.
- Type-Specific Aggregation: Leverages C++ templates to generate highly efficient aggregation logic for different data types.
Added VectorizedGroupByOperator for hash-based grouped aggregation.
- Hash-Based Grouping: Uses `unordered_map` for efficient group key lookup with collision-safe key encoding.
- Two-Phase Processing: Input phase builds hash table from batches; Output phase serves grouped results.
- Supported Aggregates: COUNT(*), SUM, MIN, and MAX with INT64/FLOAT64 columns.
- Type-Specific Accumulators: SUM uses separate `sums_int64` and `sums_float64` accumulators to preserve precision for large INT64 values.
- Collision-Safe Key Encoding: Group keys use length-prefixed encoding with dedicated NULL markers, preventing key collisions from string concatenation ambiguities.
- Pre-resolved Column Indices: Group-by column indices computed once in constructor to avoid repeated lookups.
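The length-prefixed key encoding can be illustrated with a small sketch (the function name and marker byte values are assumptions, not the engine's actual choices):

```cpp
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

// Sketch of collision-safe group-key encoding: each value is length-prefixed
// and NULL gets a dedicated marker byte, so ("ab","c") and ("a","bc") can
// never encode to the same key, and NULL is distinct from the empty string.
std::string EncodeGroupKey(const std::vector<std::optional<std::string>>& values) {
  std::string key;
  for (const auto& v : values) {
    if (!v.has_value()) {
      key.push_back('\x00');  // dedicated NULL marker
      continue;
    }
    key.push_back('\x01');  // non-NULL marker
    uint32_t len = static_cast<uint32_t>(v->size());
    key.append(reinterpret_cast<const char*>(&len), sizeof(len));  // length prefix
    key.append(*v);
  }
  return key;
}
```

Naive concatenation would map ("ab","c") and ("a","bc") to the same string "abc"; the length prefix removes that ambiguity.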
Implemented a vectorized hash join with Grace-style partitioning and batch-based processing.
- Grace Hash Partitioning: The right table is partitioned into 64 hash buckets (`NUM_BUCKETS`) for collision-safe key-based partitioning.
- Two-Phase Processing: `BuildRight` phase constructs a hash table from the right relation; `ProbeLeft` phase probes with left rows.
- Resumable Bucket Scanning: Uses `resuming_bucket_scan_`, `resumed_bucket_idx_`, `resumed_entry_idx_`, and `resumed_key_val_` to resume interrupted bucket scans when batch capacity is reached, preventing batch overflow during multi-match probes.
- Batch Size: Output batches use `BATCH_SIZE` (1024 rows) for memory-efficient processing.
- Join Type Support: INNER and LEFT joins supported; LEFT join emits unmatched left rows with NULLs for right columns.
- Matched Row Tracking: `left_matched_in_batch_` tracks matched rows within the current batch for LEFT join unmatched emission.
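The build-side partitioning step might look like this simplified sketch (`RightRow` and `PartitionRight` are illustrative names; `NUM_BUCKETS` matches the constant described above):

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Sketch of the build-side partitioning step: rows of the right relation are
// spread across NUM_BUCKETS hash buckets by join key, so the probe phase only
// has to scan the one bucket a left row's key hashes to.
constexpr size_t NUM_BUCKETS = 64;

struct RightRow { std::string key; int64_t payload; };

std::vector<std::vector<RightRow>> PartitionRight(const std::vector<RightRow>& rows) {
  std::vector<std::vector<RightRow>> buckets(NUM_BUCKETS);
  std::hash<std::string> hasher;
  for (const auto& row : rows)
    buckets[hasher(row.key) % NUM_BUCKETS].push_back(row);
  return buckets;
}
```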
As of our latest sprint, we have established a high-performance baseline for the engine's core scanning logic:
- Baseline Speed: 181M rows/s (Sequential Scan).
- Core Technology: Zero-allocation `TupleView` classes and lazy deserialization.
- Comparison: Outperforms SQLite by 9x in raw scan throughput.
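A zero-allocation row view in the spirit of `TupleView` can be sketched as follows (a simplified illustration under assumed field layout; the real class presumably decodes fields according to the table schema):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Sketch: TupleView points into an existing row buffer and decodes fields
// lazily on access, so a sequential scan performs no heap allocations.
class TupleView {
 public:
  TupleView(const uint8_t* row, size_t len) : row_(row), len_(len) {}
  size_t Size() const { return len_; }
  // Lazily decode a fixed-width INT64 field at a byte offset; nothing is
  // copied until the field is actually requested.
  int64_t GetInt64(size_t offset) const {
    int64_t v;
    std::memcpy(&v, row_ + offset, sizeof(v));
    return v;
  }
 private:
  const uint8_t* row_;  // borrowed pointer into the page/row buffer
  size_t len_;
};
```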
This provides the necessary groundwork for future SIMD and full vectorized optimizations.
Successfully verified the end-to-end vectorized pipeline, including columnar data persistence and complex analytical query patterns, through dedicated integration tests.