Automating IVL
Manual IVL — clicking Import, Validate, Load for each Workflow Profile each month — works for a handful of data sources. But as the number of Workflows, Entities, and source files grows, automation becomes essential. This guide covers the batch loading API, Event Handlers, and the partitioning strategies that make automation reliable at scale.
Batch File Loading
OneStream provides a built-in batch processing engine that can import and process files through the entire Workflow pipeline — from Import through Certify — without manual intervention.
Setting Up Batch Loading
The batch process follows these steps:
- Create a Batch Processing Extender Business Rule — This rule calls the OneStream batch API function
- Create a Data Management Sequence — Configure the sequence that will orchestrate the batch
- Prepare files with correct naming — Batch files must follow a specific naming convention that tells OneStream which Workflow Profile to target
- Copy files to the Batch Harvest folder — Located at: System Tab → File Explorer → File Share → Applications → [App Name] → Batch → Harvest
- Execute the batch — Run the Data Management Sequence manually or on a schedule
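Step 4 (staging files into the Harvest folder) is often scripted outside OneStream. A minimal Python sketch, assuming the application file share is reachable as a local path (the directory layout below is illustrative; match it to your environment):

```python
import shutil
from pathlib import Path

def stage_batch_files(source_dir: str, harvest_dir: str) -> list:
    """Copy prepared batch files into the Batch Harvest folder.

    harvest_dir is assumed to point at the application's
    Batch/Harvest file-share folder (illustrative path).
    """
    harvest = Path(harvest_dir)
    harvest.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(Path(source_dir).glob("*.csv")):
        shutil.copy2(f, harvest / f.name)   # preserve timestamps
        copied.append(f.name)
    return copied
```

From here, the scheduled Data Management Sequence picks the files up and executes the batch.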
The ExecuteFileHarvestBatchParallel API
The primary API for automated batch loading is ExecuteFileHarvestBatchParallel, called from the Batch Processing Extender Business Rule.
This single API call orchestrates the entire IVL pipeline (and optionally beyond) with parallel processing.
Parameters
| Parameter | Type | Description |
|---|---|---|
| si | SessionInfo | The current session info object |
| fixedScenario | String | The Scenario to target (e.g., "Actual") |
| systemTime | String | The time period to load (e.g., "2026M1") |
| valTransform | Boolean | Whether to run Transformation Validation |
| valIntersect | Boolean | Whether to run Intersection Validation |
| loadCube | Boolean | Whether to load validated data to the Cube |
| processCube | Boolean | Whether to run Process Cube after loading (consolidation calculations) |
| confirm | Boolean | Whether to run Confirmation Rules |
| autoCertify | Boolean | Whether to auto-certify after all steps complete |
| ParallelCount | Integer | Number of parallel threads for processing |
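The Boolean parameters read as switches that progressively extend the pipeline past Import. The real API is invoked from a VB.NET or C# Extender Business Rule via BRApi; the following Python mock only illustrates the gating and parallelism semantics (function names and return shapes are assumptions, not the actual signature):

```python
from concurrent.futures import ThreadPoolExecutor

def run_ivl_pipeline(file_name, val_transform, val_intersect,
                     load_cube, process_cube, confirm, auto_certify):
    """Mock of the per-file steps the batch API performs."""
    steps = ["Import"]                      # Import always runs
    if val_transform:
        steps.append("ValidateTransform")   # Transformation Validation
    if val_intersect:
        steps.append("ValidateIntersect")   # Intersection Validation
    if load_cube:
        steps.append("LoadCube")            # load validated data to the Cube
    if process_cube:
        steps.append("ProcessCube")         # consolidation calculations
    if confirm:
        steps.append("Confirm")             # Confirmation Rules
    if auto_certify:
        steps.append("Certify")             # auto-certify on success
    return (file_name, steps)

def run_batch(files, parallel_count, **flags):
    """Process many files across ParallelCount worker threads (mock)."""
    with ThreadPoolExecutor(max_workers=parallel_count) as pool:
        return list(pool.map(lambda f: run_ivl_pipeline(f, **flags), files))
```

The point of the mock: stopping at loadCube=True gives you classic IVL, while processCube, confirm, and autoCertify extend the same call through the rest of the Workflow.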
Batch File Naming Convention
Batch files must follow a specific naming format so OneStream knows which Workflow Profile, Scenario, Time, and Load Method to use.
The file name portion maps to the Workflow Profile. OneStream parses the name to determine the target. See the Design and Reference Guide section on "Batch File Name Format Specification" for the complete naming rules.
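As a concrete illustration, here is a Python sketch that parses one plausible delimiter-separated layout (Profile~Scenario~Time~LoadMethod). The tilde-delimited layout is an assumption for illustration only; confirm the authoritative format in the Design and Reference Guide before relying on it:

```python
from dataclasses import dataclass

@dataclass
class BatchFileTarget:
    workflow_profile: str
    scenario: str
    time: str
    load_method: str

def parse_batch_file_name(file_name: str) -> BatchFileTarget:
    """Split an assumed 'Profile~Scenario~Time~LoadMethod.ext' name.

    The delimiter and field order here are illustrative; the real
    naming rules live in the Design and Reference Guide.
    """
    stem = file_name.rsplit(".", 1)[0]   # drop the file extension
    parts = stem.split("~")
    if len(parts) < 4:
        raise ValueError("Unexpected batch file name: " + file_name)
    return BatchFileTarget(*parts[:4])
```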
Batch Processing Results
Each batch process creates a detailed Task Activity entry with:
- Overall status results for the batch
- Detailed information about each processed file
- Status of each Workflow step (Import, Validate, Load, etc.)
The batch function also returns a detailed results object to the Extender Business Rule. This object can be programmatically evaluated to create custom reporting and notifications.
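For example, the Extender rule can reduce the results to a pass/fail report and drive notifications from it. The sketch below assumes a simple per-file results structure (the real results object's shape differs; this only shows the evaluate-and-notify pattern):

```python
def summarize_batch(results):
    """Reduce per-file step statuses into an overall status plus failures.

    `results` is a hypothetical list of dicts like
    {"file": "a.csv", "steps": {"Import": "OK", "Validate": "Failed"}}.
    """
    failures = []
    for r in results:
        bad = [step for step, status in r["steps"].items() if status != "OK"]
        if bad:
            failures.append((r["file"], bad))
    overall = "Success" if not failures else str(len(failures)) + " file(s) failed"
    return overall, failures
```

The (file, failed-steps) pairs are what you would feed into a custom email or dashboard notification.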
The Non-Parallel Variant
For simpler scenarios, the batch API also provides a non-parallel variant that processes files sequentially rather than in parallel. Use it when parallelism is not needed or when you want simpler debugging.
Event Handlers
Event Handlers are Business Rules that fire automatically when specific events occur during the Workflow lifecycle. Two types are relevant to IVL automation:
Workflow Event Handler
Fires when the Workflow completes a specific step. Common IVL uses:
| Event | When It Fires | Example Use |
|---|---|---|
| After Import completes | When the Import task finishes | Auto-generate an export file from the staged data |
| After Validate completes | When Validation passes | Send a notification to the finance team |
| After Load completes | When Cube load finishes | Trigger a downstream calculation or report |
Transformation Event Handler
Fires at various points during the Import-through-Load pipeline. These provide more granular control than Workflow Event Handlers:
| Sub-Event | When It Fires | Example Use |
|---|---|---|
| ValidateTransform | After Transformation Validation runs | Custom logic to auto-fix common mapping errors |
| After Import | When data hits the Stage tables | Auto-export transformed data to an external system |
Event Handler Best Practices
Example: Automating Transformation Validation in an Event Handler:
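The handler below is an illustrative Python sketch of the auto-fix pattern described above. A real OneStream Event Handler is a VB.NET or C# Business Rule working against the Stage tables; the row structure, the fixes map, and the fallback member here are all assumptions:

```python
FALLBACK_ACCOUNT = "UnmappedSuspense"   # hypothetical holding member

def on_validate_transform(staged_rows, fixes=None):
    """Runs after Transformation Validation: auto-fix common mapping errors.

    staged_rows: hypothetical list of dicts with 'source_account' and
    'target_account' (None when the member failed to map).
    fixes: optional {source_account: target_account} known corrections.
    Returns the number of rows repaired.
    """
    fixes = fixes or {}
    repaired = 0
    for row in staged_rows:
        if row["target_account"] is None:
            # Apply a known correction, else park the row on a suspense member
            row["target_account"] = fixes.get(
                row["source_account"], FALLBACK_ACCOUNT)
            repaired += 1
    return repaired
```

The design point: fix what you can automatically, route the rest to a visible suspense member, and let Validation pass so the batch keeps moving.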
Partitioning Strategy for Large Volumes
For enterprise-scale data loads (millions of rows), partitioning is critical. The strategy aligns with the principles covered in the Load Step guide:
Step 1: Partition by Entity
Break your data into separate files by Entity (or groups of Entities):
- Large Entities with high row counts → their own Workflow Profile and file
- Smaller Entities → grouped together in shared Workflow Profiles
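A sketch of that split, assuming you know approximate row counts per Entity (the threshold values are illustrative, not recommendations):

```python
def partition_entities(row_counts, big_threshold=500_000, group_cap=500_000):
    """Assign Entities to Workflow Profile partitions.

    Entities at or above big_threshold get their own partition; smaller
    ones are packed together until a shared partition reaches group_cap.
    Returns a list of (entity_list, total_rows) pairs.
    """
    partitions = []
    current, current_rows = [], 0
    # Largest first, so big Entities are isolated before packing the rest
    for entity, rows in sorted(row_counts.items(), key=lambda kv: -kv[1]):
        if rows >= big_threshold:
            partitions.append(([entity], rows))
        elif current and current_rows + rows > group_cap:
            partitions.append((current, current_rows))
            current, current_rows = [entity], rows
        else:
            current.append(entity)
            current_rows += rows
    if current:
        partitions.append((current, current_rows))
    return partitions
```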
Step 2: Create Matching Workflow Profiles
Each file maps to a Workflow Profile. The batch file naming convention links the file to the correct profile.
Step 3: Run in Parallel
Use ExecuteFileHarvestBatchParallel with an appropriate ParallelCount to process multiple Workflow Profiles simultaneously.
Real-World Example
A large retail company with 6 million rows of data:
- Broke out the largest Entities into their own Workflows
- Grouped smaller Entities into shared Workflows
- Used ExecuteFileHarvestBatchParallel with a parallel count of 8
- Started at 4 threads and increased until the servers showed capacity limits
- Paid particular attention to the Database Server (the typical bottleneck)
Scheduling Automated Loads
To run batch loads on a schedule:
- Create a Task Scheduler task in OneStream
- Set a starting date and time
- Select the Data Management Sequence from the navigation tree
- In the Schedule tab, define the recurrence (daily, weekly, etc.)
You can also set up email notifications in the Data Management Sequence properties under Sequence Properties → Notifications. Define who gets notified for each event type (success, failure, warnings).