Data Tables
Built-in data storage for workflows, including lookup tables, state tracking, and deduplication.
Note: Compose is only available for Beta testing. See Welcome to UKG Compose > Beta Disclosure for more information.
Data Tables provide built-in data storage for your workflows, allowing you to store and manage structured data directly within Compose without needing an external database.
What Data Tables Are
Data Tables are internal storage you can use from any workflow in your Compose project. Think of them as lightweight spreadsheets you can create, query, and update from workflow logic.
Key characteristics:
- Structured tabular data (rows and columns)
- Created and managed through the Data Tables node
- Available across all workflows in your project
- Stored within Compose (counts toward your storage capacity)
- Supports standard data types: String, Number, Date, Boolean
When to Use Data Tables
Deduplication and Idempotency
Track which records you've already processed to prevent duplicate actions.
Use cases:
- Prevent duplicate notifications (track which employees received a notice)
- Avoid processing the same event twice (track punch IDs, webhook request IDs)
- Ensure idempotent workflow behavior when events may be retried
Pattern: Before processing an event, check if its ID exists in your tracking table. If it exists, skip processing. If not, process the event and insert the ID.
Lookup Tables for Business Rules
Store policy rules, decision criteria, or reference data that workflows need to consult.
Use cases:
- Wage step progression tables (job code → step → rate)
- Shift differential rules (shift type → differential percentage)
- PTO accrual policies (tenure → accrual rate)
- Department routing rules (department code → manager, approver)
- Custom business logic tables
Pattern: When workflow logic needs a policy value, query the lookup table instead of hard-coding values in the workflow.
State Tracking Across Workflow Executions
Remember workflow state between executions or across related workflows.
Use cases:
- Track approval request status (pending, approved, rejected)
- Maintain multi-step process state (onboarding checklist progress)
- Store temporary results between workflow runs
- Track retry attempts or escalation counts
Pattern: Insert a row when starting a process. Update the row as the process progresses. Query the table from other workflows to check status.
Cross-Workflow Data Sharing
Share data between workflows that need to coordinate or access common information.
Use cases:
- Multiple workflows checking the same approval status
- Shared employee preference data across workflows
- Common reference data used by multiple automation processes
Pattern: One workflow writes data to the table. Other workflows query the table to access that shared data.
Temporary Staging and Data Collection
Collect data from multiple sources or executions before processing.
Use cases:
- Aggregate timecard exceptions throughout the day, process in batch at end of day
- Collect feedback from multiple approvers before final decision
- Stage data from multiple API calls before downstream processing
Pattern: Multiple workflow executions insert rows. A scheduled workflow later processes all accumulated rows.
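The staging pattern above can be sketched in Python. This is only an illustration of the logic: the in-memory list stands in for a data table, and the function names are hypothetical, not the Compose Data Tables API.

```python
# Illustrative sketch only: an in-memory list stands in for a Compose
# data table; the real Data Tables node handles storage and querying.

staging_table = []  # hypothetical "timecard_exceptions" staging table

def insert_exception(employee_id, minutes_short):
    """Each workflow execution inserts one row during the day."""
    staging_table.append({"employee_id": employee_id,
                          "minutes_short": minutes_short})

def process_batch():
    """A scheduled end-of-day workflow reads all accumulated rows,
    then deletes them so the next day starts from an empty table."""
    processed = list(staging_table)
    staging_table.clear()
    return processed

insert_exception("E100", 15)
insert_exception("E101", 30)
batch = process_batch()  # both rows; staging table is empty again
```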
Audit and Compliance Records
Track workflow decisions and actions for compliance or audit purposes.
Use cases:
- Log which workflows made employment-related decisions
- Record approval chains and decision criteria
- Track when sensitive data was accessed or modified
- Maintain workflow execution metadata for compliance review
Pattern: After critical workflow actions, insert audit records with timestamp, user, action, and outcome.
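The audit pattern maps to a single insert per critical action. A minimal sketch, again with an in-memory list standing in for the table and hypothetical column names:

```python
from datetime import datetime, timezone

audit_table = []  # hypothetical "workflow_audit" table

def log_action(user, action, outcome):
    # After a critical workflow step, insert one audit row carrying
    # the timestamp, actor, action, and outcome described above.
    audit_table.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "outcome": outcome,
    })

log_action("workflow:pto_approval", "approve_request", "approved")
```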
Data Tables Node Operations
Table Operations
Create a table:
- Define table name
- Define columns (name and data type for each)
- Option: Reuse existing table if one with the same name already exists
List tables:
- Retrieve all tables in your project
- Filter by table name
- Limit results returned
Update a table:
- Rename an existing table
Delete a table:
- Permanently delete a table and all its data (cannot be undone)
Row Operations
Insert rows:
- Add new data to a table
- Map incoming workflow data to table columns (manually or automatically)
- Option: Optimize for bulk inserts (faster for large batches)
Get rows:
- Query rows based on conditions
- Supports: Equals, Not Equals, Greater Than, Less Than, Is Empty, Is Not Empty
- Match any condition or all conditions
- Limit results and order by column
Update rows:
- Modify existing rows that match conditions
- Map incoming data to columns
- Option: Dry run to preview changes without applying them
Upsert rows:
- Update rows if they exist, insert if they don't
- Useful for maintaining current state without checking whether the record already exists
- Map incoming data to columns
- Option: Dry run to preview operation
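The upsert contract (update if a match exists, insert otherwise) can be sketched as plain Python. The function and table here are illustrative stand-ins, not the node's actual implementation:

```python
def upsert_row(table, match_column, row):
    """Update the first row whose match_column equals the incoming
    value; insert the row if no match exists (the upsert contract)."""
    for existing in table:
        if existing.get(match_column) == row[match_column]:
            existing.update(row)   # update path
            return "updated"
    table.append(dict(row))        # insert path
    return "inserted"

prefs = []
upsert_row(prefs, "employee_id", {"employee_id": "E1", "lang": "en"})
upsert_row(prefs, "employee_id", {"employee_id": "E1", "lang": "fr"})
# prefs holds a single row, now with lang "fr"
```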
Delete rows:
- Remove rows that match conditions
- Option: Dry run to preview which rows would be deleted
If Row Exists / If Row Does Not Exist:
- Branch workflow logic based on whether matching rows exist
- If Row Exists: continues workflow only if matching row found
- If Row Does Not Exist: continues workflow only if no matching row found
- Useful for deduplication and conditional processing
Common Workflow Patterns
Pattern 1: Deduplication (Prevent Duplicate Processing)
1. Webhook Trigger receives event (with unique event ID)
2. Data Tables node: "If Row Does Not Exist"
   - Check if event ID exists in tracking table
   - If row does NOT exist: continue to step 3
   - If row exists: workflow stops (already processed)
3. Process the event (send notification, update data, etc.)
4. Data Tables node: Insert row
   - Store event ID and timestamp to mark as processed
Why this works: The workflow only proceeds if the event hasn't been processed before. Once processed, the event ID is recorded so future duplicate events are ignored.
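The same check-then-record logic can be sketched in Python. A set stands in for the tracking table's event IDs; the names are illustrative, not the Compose API:

```python
processed_events = set()  # stands in for the tracking table's event IDs

def handle_event(event_id, process):
    # "If Row Does Not Exist" check: skip already-processed events.
    if event_id in processed_events:
        return "skipped"            # row exists: stop the workflow
    process(event_id)               # safe to process exactly once
    processed_events.add(event_id)  # insert row to mark as processed
    return "processed"

sent = []
handle_event("evt-1", sent.append)  # → "processed"
handle_event("evt-1", sent.append)  # → "skipped" (duplicate delivery)
```

Note the ordering: the ID is recorded only after processing succeeds, so a failed run can be retried.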
Pattern 2: Lookup Table for Business Rules
1. Trigger: Employee job change event
2. Extract new job code from event data
3. Data Tables node: Get row
   - Query lookup table where job_code = new job code
   - Retrieve shift_differential_percent for that job
4. Use differential percentage in compensation calculation
5. Update employee compensation record
Why this works: Business rules are stored in the lookup table, not hard-coded in the workflow. Update the table to change rules without modifying workflow logic.
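A minimal sketch of the lookup step, with a hypothetical shift-differential table and hand-picked sample values:

```python
# Hypothetical "shift_differentials" lookup table (sample values)
shift_differentials = [
    {"job_code": "RN2", "shift_differential_percent": 10},
    {"job_code": "RN3", "shift_differential_percent": 15},
]

def get_differential(job_code):
    """Get-row query: job_code Equals the new job code."""
    for row in shift_differentials:
        if row["job_code"] == job_code:
            return row["shift_differential_percent"]
    return 0  # policy choice: no differential when no rule matches

base_rate = 40.00
new_rate = base_rate * (1 + get_differential("RN3") / 100)
# new_rate == 46.0 — editing the table changes the rule, not the workflow
```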
Pattern 3: Track Approval Workflow State
Workflow 1: Create approval request
1. Generate unique request ID
2. Data Tables node: Insert row
   - Columns: request_id, employee_id, status="pending", created_timestamp
3. Send approval notification to manager
4. (Workflow pauses waiting for approval response)
Workflow 2: Process approval response
1. Receive approval action (approve/reject)
2. Data Tables node: Update row
   - Find row where request_id matches
   - Set status="approved" or status="rejected", updated_timestamp=now
Workflow 3: Query approval status (from another workflow)
1. Data Tables node: Get row
   - Query where employee_id matches and status="pending"
   - Returns all pending approvals for that employee
Why this works: Approval state persists across workflows and executions. Any workflow can query current approval status.
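The three workflows above share one table. A sketch of that shared-state idea, with an in-memory list standing in for the approvals table and illustrative column names:

```python
approvals = []  # hypothetical "approval_requests" table

def create_request(request_id, employee_id):          # Workflow 1
    approvals.append({"request_id": request_id,
                      "employee_id": employee_id,
                      "status": "pending"})

def record_decision(request_id, decision):            # Workflow 2
    for row in approvals:
        if row["request_id"] == request_id:
            row["status"] = decision                  # update matched row

def pending_for(employee_id):                         # Workflow 3
    return [r for r in approvals
            if r["employee_id"] == employee_id
            and r["status"] == "pending"]

create_request("req-1", "E7")
create_request("req-2", "E7")
record_decision("req-1", "approved")
# pending_for("E7") now returns only the req-2 row
```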
Pattern 4: Cross-Workflow Coordination
Workflow A: Stores employee preferences
1. Employee submits preference form
2. Data Tables node: Upsert row
   - Columns: employee_id, notification_preference, language_preference
   - Upsert ensures preference is updated if exists, inserted if new
Workflow B: Uses employee preferences
1. Trigger: Need to send notification to employee
2. Data Tables node: Get row
   - Query where employee_id matches
   - Retrieve notification_preference
3. IF node: Branch based on preference
   - If "email": send email
   - If "SMS": send SMS
   - If "app_notification": send app notification
Why this works: One workflow manages preferences. Many workflows query those preferences without each needing its own storage.
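Workflow B's query-then-branch step can be sketched as follows. The table contents, default preference, and return strings are illustrative assumptions:

```python
preferences = [{"employee_id": "E9", "notification_preference": "SMS"}]

def notify(employee_id):
    # Get-row query by employee_id, then branch on the stored
    # preference, mirroring the IF node in Workflow B.
    pref = next((r["notification_preference"] for r in preferences
                 if r["employee_id"] == employee_id),
                "email")  # assumed default when no row is found
    if pref == "email":
        return "sent email"
    elif pref == "SMS":
        return "sent SMS"
    else:
        return "sent app notification"

notify("E9")    # → "sent SMS"
notify("E404")  # → "sent email" (no row found, fall back to default)
```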
Accessing Data Tables
From the Data Tables Node
Use the Data Tables node in your workflows to:
- Create and manage tables programmatically
- Insert, query, update, and delete rows as part of workflow logic
- Branch workflow paths based on whether rows exist
See the Data Tables node in the workflow editor node selection panel under core nodes.
From the Data Tables Tab (Manual Management)
You can also view and manage data tables manually from the Compose UI:
- Navigate to the Data Tables tab in your Compose project
- Create tables from scratch or import from CSV
- View, edit, add, and delete rows directly
- Export table data as CSV
When to use manual management:
- Creating initial lookup tables or reference data
- Reviewing workflow-generated data
- Debugging workflow data issues
- Bulk importing reference data from CSV
Considerations and Limitations
Storage Limits
Data Tables count toward your Compose data storage capacity (see Packaging and Scale Guidance).
What this means:
- You have a total storage limit across all data tables in your project
- When approaching the limit, you'll see warnings
- Exceeding the limit will prevent new inserts and cause workflow errors
- Monitor your data table usage and clean up old data regularly
Project Scope
Data Tables are scoped to your Compose project.
What this means:
- Tables created in one project are NOT accessible from other projects
- All workflows in the same project can access the same tables
- If you need to share data across projects, use external storage or APIs instead
Performance Considerations
Data Tables are designed for lightweight data storage, not large-scale data warehousing.
Best practices:
- Use for thousands of rows, not millions
- Keep row sizes reasonable (don't store large blobs of data)
- Delete old rows you no longer need (especially tracking/audit data)
- For large datasets or complex queries, use external databases instead
Data Types
Data Tables support four data types:
- String - Text data
- Number - Numeric values (integers or decimals)
- Date - Date and timestamp values
- Boolean - True/false values
Choose the appropriate type when creating columns to ensure correct sorting and comparison behavior.
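Why the column type matters for sorting can be seen with a quick Python analogy: numbers stored as strings compare character by character, so "10" sorts before "9".

```python
# Storing numeric values in a String column breaks ordering:
as_strings = sorted(["9", "10", "2"])  # lexicographic: ['10', '2', '9']
as_numbers = sorted([9, 10, 2])        # numeric:       [2, 9, 10]
```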
Data Tables vs External Databases
Use Data Tables when:
- You need lightweight reference data (lookup tables, configuration)
- You're tracking workflow state or deduplication
- Data is shared across a few workflows in the same project
- You want to avoid managing external database infrastructure
Use external databases when:
- You have large datasets (tens of thousands of rows or more)
- Data must be shared across many systems or projects
- You need complex queries, joins, or reporting
- Data must persist independently of Compose
- You need advanced database features (transactions, stored procedures, etc.)
Related Pages
Other node types:
- Webhook Triggers
- Notification Nodes
- Approval and Workflow Nodes
- API Integration Nodes
- Data Query Nodes