Data Ingestion

The Import Step

The Import step is the entry point for external data. The user clicks the Import icon on the Workflow task bar, selects a file (or triggers a connector), and the Stage Engine takes over — parsing the raw data into a clean tabular format, recording the Source ID, and applying Transformation Rules. This guide covers the full Import lifecycle.

What the User Does

  1. Click the Import Workflow Origin located under the active month in the Workflow
  2. Click the Import icon in the task bar
  3. The system prompts the user to either browse for a file on disk or initiate the Data Connector (depending on the Data Source configuration)
  4. Select the Load Method (see below)
  5. The Stage Engine parses the file and applies Transformation Rules
Once import completes successfully, the Import task changes from blue to green.

Load Methods

When the user clicks Import (Load and Transform), a dialog appears with four Load Method options:
Replace
  Behavior: Clears all data for the previous file that matches the specific Source ID and replaces it with the new file's data. The user must complete all remaining Workflow tasks and reload to the Cube.
  When to use: Most common — re-importing a corrected file for a specific source.

Replace (All Time)
  Behavior: Replaces all Workflow Units in the selected Workflow View, forcing a replacement of all time values in a multi-period Workflow View.
  When to use: The file contains data across multiple periods and you want to replace everything.

Replace Background (All Time, All Source IDs)
  Behavior: Replaces all Workflow Units and all Source IDs in a background thread while the new file parse or connector execution runs. The delete happens concurrently with the parse.
  When to use: Large files where you want faster processing and are replacing everything.

Merge
  Behavior: Adds new data alongside existing staged data without clearing previous records. Values for the same intersections are accumulated (added together).
  When to use: Loading supplemental data from a second source into the same Workflow Unit.
⚠️Warning
Replace Background deletes ALL Source IDs. If your Workflow uses multiple Source IDs for partial replacement during a load (e.g., one source for GL data, another for allocations), you cannot use this method — it will wipe data from other Source IDs.
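The differences between the four Load Methods come down to which staged rows are deleted before the new rows are inserted. The following is a minimal sketch of that logic, modeling the staging area as a dict keyed by (Source ID, Workflow Unit, intersection); the function and parameter names are illustrative, not product APIs.

```python
def apply_load(stage, new_rows, method, source_id, units_in_view):
    """Apply one load to a staging dict.

    stage:    {(source_id, workflow_unit, intersection): amount}
    new_rows: list of (source_id, workflow_unit, intersection, amount)
    """
    if method == "Replace":
        # Clear prior data for this Source ID, but only in the units being loaded
        target_units = {unit for _, unit, _, _ in new_rows}
        for key in [k for k in stage if k[0] == source_id and k[1] in target_units]:
            del stage[key]
    elif method == "Replace (All Time)":
        # Clear this Source ID across every unit in the Workflow View
        for key in [k for k in stage if k[0] == source_id and k[1] in units_in_view]:
            del stage[key]
    elif method == "Replace Background (All Time, All Source IDs)":
        # Clears EVERY Source ID in the view (the product does this concurrently
        # with the parse) -- this is why other sources' data gets wiped
        for key in [k for k in stage if k[1] in units_in_view]:
            del stage[key]
    # Merge deletes nothing; inserting after a delete (or none) means
    # same-intersection amounts simply accumulate
    for sid, unit, intersection, amount in new_rows:
        key = (sid, unit, intersection)
        stage[key] = stage.get(key, 0.0) + amount
    return stage
```

Note how the warning above falls out of the third branch: the delete predicate ignores the Source ID entirely, so a GL load with Replace Background also removes, say, staged allocation rows.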

How the Stage Engine Parses Data

When a file is imported, the Stage Engine:
  1. Reads the raw file using the Data Source configuration (delimiter, column positions, etc.)
  2. Parses each record into a clean tabular row containing the Amount, the Source ID, and each dimension's source value
  3. Applies Transformation Rules to map source values to Cube dimension members
  4. Stores the result in the Stage tables — both the original source values and the transformed target values
The Stage tables are partitioned by a GUID that the system generates for each Workflow. This partitioning means different Workflows have isolated staging areas. Each partition functions as a "bucket" — the Stage Engine always works on the entire bucket (inserts, updates, or deletes the whole thing), which is why partitioning matters for performance.
ℹ️Info
The Stage Engine does not know your dimension structure at parse time. It only knows two things for certain: the Scenario and the Time. Everything else (Entity, Account, etc.) is just raw text until Transformation Rules map it to real Cube members.
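The parse-then-transform flow above can be sketched in a few lines. This is a simplified illustration, not the product's actual schema: the Data Source configuration is reduced to a delimiter plus column positions, Transformation Rules to a lookup table, and the Workflow partition to a fresh GUID.

```python
import csv
import io
import uuid

def parse_and_transform(raw_text, delimiter, dim_columns, amount_column,
                        rules, source_id):
    """Parse raw records into staged rows that keep both the original
    source values and the transformed target values.

    dim_columns: {dimension_name: column_position}
    rules:       {(dimension_name, source_value): target_member}
    """
    partition = str(uuid.uuid4())   # one staging "bucket" per Workflow
    staged = []
    for record in csv.reader(io.StringIO(raw_text), delimiter=delimiter):
        row = {
            "partition": partition,
            "source_id": source_id,
            "amount": float(record[amount_column]),
        }
        for dim, pos in dim_columns.items():
            source_value = record[pos]
            row[dim + "_source"] = source_value          # original value kept
            # Unmapped values pass through as raw text until a rule exists
            row[dim] = rules.get((dim, source_value), source_value)
        staged.append(row)
    return staged
```

Storing both `<dim>_source` and `<dim>` per row is what lets the right-click inspection tools show exactly which rule turned a raw value into a Cube member.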

Right-Click Options

After Import completes, right-clicking on the Workflow channel reveals several inspection tools:

View Source Document

Opens the original source file that was imported into the application. Useful for verifying that the correct file was loaded.

View Processing Log

Opens a processing log with information on when and how the source file was imported — timestamps, row counts, and any parsing warnings.

View Transformation Rules

Displays all mapping rules for the specific intersection. This is the fastest way to see which rules were applied to transform source values into target values.

Drill Back

Available only when data was loaded using a Connector Data Source. Drill Back connects to the original source system and shows detailed records — documents, PDFs, or web pages — within the application.
💡Tip
Drill Back is a powerful audit feature, but it only works with Connector Data Sources. If you load from a flat file, Drill Back is not available. Plan your Data Source type accordingly if source-system traceability is a requirement.

Export

Exports the staged data to a file. Useful for offline analysis or for sharing transformation results with stakeholders.

Import Status and Troubleshooting

Symptom: Import icon stays blue after clicking
  Likely cause: The Data Source is not assigned to the Workflow Profile — check Integration Settings.

Symptom: Import turns red
  Likely cause: Parsing error — check the Processing Log for details (wrong delimiter, missing columns, file format mismatch).

Symptom: Row count is lower than expected
  Likely cause: The file has header rows that need to be skipped, or records are being filtered by Static Values on the Data Source.

Symptom: Amount values are wrong
  Likely cause: Wrong Data Structure Type (Tabular vs. Matrix), or incorrect column positions for Fixed files.

Global POV Constraints

Two Application Properties affect what data can be imported:
  • Enforce Global POV — When enabled, data loading is restricted to the Global POV. Users cannot import data outside the enforced time range.
  • Allow Loads Before/After Workflow View Year — Separate settings that control whether imports can target periods before or after the current Workflow year. When restricted, special icons appear on the Import task bar.
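Taken together, these two properties act as a gate on the target period of an import. A minimal sketch of that gate, assuming a year-level check (the parameter names are illustrative, not actual Application Property names):

```python
def can_import(period_year, workflow_year,
               enforce_global_pov, pov_start_year, pov_end_year,
               allow_before=False, allow_after=False):
    """Return True if a load targeting period_year is permitted."""
    # Enforce Global POV: the target period must fall inside the enforced range
    if enforce_global_pov and not (pov_start_year <= period_year <= pov_end_year):
        return False
    # Allow Loads Before/After Workflow View Year: separate toggles
    if period_year < workflow_year and not allow_before:
        return False
    if period_year > workflow_year and not allow_after:
        return False
    return True
```

In the product these checks happen before the Stage Engine runs, which is why a restricted period surfaces as a special icon on the Import task bar rather than a parsing error.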