Administration and Operations

Operational guidance, troubleshooting playbooks, and workflow lifecycle management.

Note: Compose is only available for Beta testing. See Welcome to UKG Compose > Beta Disclosure for more information.

This page explains how to operate UKG Compose after initial setup, focusing on administration and support practices for managing workflows in production-like environments. This page uses the same terminology you see in the editor (workflow, execution, Pending, Failed, Execution History, Error Trigger, etc.).

What admins and operators are responsible for

In UKG Compose, administration and operations typically include:

  • Confirming prerequisites are in place
  • Validating that the right users have the right level of access
  • Reviewing workflow executions
  • Troubleshooting workflow failures or unexpected results
  • Helping builders activate workflows safely
  • Guiding how workflows are updated and supported over time

This page is not meant to replace node-specific documentation. Instead, it explains how to manage Compose as an operational workflow tool.

Workflow lifecycle management

Draft and active workflows

Most workflows move through a simple lifecycle:

  1. A builder creates or updates the workflow in draft form.
  2. The workflow is tested in the editor.
  3. The workflow is activated.
  4. Executions are monitored and reviewed.
  5. Changes are made when issues, new requirements, or improvements are identified.

Admins and advanced builders should encourage teams to fully test workflows before activating them.

Making changes safely

When updating an existing workflow:

  • Make one logical change at a time
  • Test the updated workflow before activating it
  • Review the next execution after the change
  • Be careful when changing trigger behavior, branch logic, or downstream actions
  • Confirm prerequisites and permissions still match the intended design

This is especially important for workflows that:

  • Send notifications
  • Update data
  • Wait for approvals
  • React to UKG Pro events
  • Call external systems

Using Execution History

Execution History is one of the most important operational tools in UKG Compose (see Build Workflows in UKG Compose). Each execution represents one run of a workflow and provides a record of what happened during that run.

Admins and advanced builders should use Execution History to:

  • Review whether a workflow started
  • See which path the workflow followed
  • Inspect which nodes executed
  • Review the data processed by each node
  • Identify which node returned an error
  • Troubleshoot failed, running, or pending workflow behavior

Execution History is often the fastest way to determine whether a problem started at the trigger (see Webhook Triggers), in branch logic, in a data transformation step, or in a downstream action node (see Connectors and Connections).

The Executions tab

The Executions tab in the Workflow Editor provides access to the complete execution history for a workflow. This tab contains a stored record of each workflow run, including:

  • Timestamps - When the workflow started and completed
  • Status - Success, Failed, Running, or Pending
  • Execution flow - Visual representation of which nodes ran and which path was followed
  • Node data - Input and output data for each executed node
  • Error details - Error messages and stack traces when failures occur

Use the Executions tab to monitor, debug, audit, and re-run past workflow executions.

Filtering executions

Use the filter controls in the Executions tab to narrow the list of workflow runs:

  • Filter by status - Show only Success, Failed, Running, or Pending executions
  • Filter by date range - Review executions within a specific time period
  • Search by execution ID - Locate a specific workflow run directly

Filtering by status is especially useful when troubleshooting recent failures or reviewing workflows that are waiting for user responses (Pending status).

Custom execution data

Workflows can include customizable metadata in execution records to support operational tracking and troubleshooting. Custom execution data can include:

  • Business identifiers (employee ID, request ID, transaction ID)
  • Correlation or tracking values
  • Custom metadata fields for audit or reporting purposes

Use custom execution data to:

  • Track business context across workflow runs
  • Correlate related executions
  • Support operational reporting and compliance review
  • Simplify troubleshooting by adding meaningful identifiers to execution records
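As an illustrative sketch only (the field names and helper below are hypothetical, not a documented Compose API), a workflow step might assemble custom execution data like this before attaching it to the execution record:

```javascript
// Hypothetical helper: assemble custom execution data for a workflow run.
// The field names (employeeId, requestId, correlationId) are examples only.
function buildCustomExecutionData(event) {
  return {
    employeeId: event.employeeId ?? 'unknown',
    requestId: event.requestId ?? 'unknown',
    // A correlation value lets you find related executions later.
    correlationId: event.correlationId ?? `${event.employeeId}-${Date.now()}`,
  };
}

// In an n8n-style Code node, values like these would then typically be
// attached to the execution record so they appear in Execution History.
const data = buildCustomExecutionData({ employeeId: 'E1001', requestId: 'REQ-42' });
console.log(data.correlationId);
```

Attaching a stable business identifier like `requestId` makes it much easier to locate one run among many when filtering executions later.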

What to review in an execution

When reviewing an execution, start with:

  • The overall execution status
  • The trigger node result
  • The path taken through any IF or Switch nodes
  • The first node that returned an error or unexpected output
  • The final action taken or not taken

Then inspect the node-by-node flow to understand:

  • What input each node received
  • What output each node produced
  • Whether the workflow processed the expected values
  • Where a response, update, or branch decision changed the final outcome

Execution History filters and actions

Use filters in Execution History to narrow the list of workflow runs you need to review. Filtering by status is especially important. Pay close attention to:

  • Pending executions, which may indicate workflows waiting on approval responses, delayed continuation, or another expected workflow event
  • Failed executions, which should be reviewed to identify the node and error that stopped the workflow
  • Running executions, which may indicate active processing or a long-running path
  • Success executions, which are useful when comparing a failed run to a known-good run

Reviewing the execution flow

When you open an execution, review the workflow flow visually to see:

  • Which nodes ran
  • Which path the workflow followed
  • Which node stopped the run
  • Whether the node outputs match what the workflow logic expected

This is especially useful for:

  • Approval-oriented workflows
  • Event-driven workflows
  • Webhook-triggered flows
  • Workflows that route to different outcomes based on conditions

Reviewing data processed by nodes

Execution History should also be used to inspect the data processed by nodes.

Review:

  • The input received by a node
  • The output produced by that node
  • Whether values were missing, empty, or differently formatted than expected
  • Whether a downstream node depended on a value that was never produced

This is often the clearest way to explain why an IF or Switch node followed an unexpected path.

Reviewing error messages

When a workflow fails:

  • Identify the exact node where the failure occurred
  • Review the error message returned on that node
  • Confirm whether the issue is a configuration problem, missing prerequisite, invalid data value, or downstream service error

For most workflow failures, troubleshooting should begin with Execution History before any deeper investigation.

Reviewing sub-workflows and related executions

Some Compose solutions may split work across more than one workflow. In those cases, admins and advanced builders may need to review multiple related executions together rather than relying on a single execution record.

Use this approach:

  • Start with the parent workflow execution
  • Confirm where the downstream workflow or follow-up process was triggered
  • Capture any shared identifiers, such as transaction ID, correlation ID, job ID, request ID, or person ID
  • Locate the related downstream execution
  • Compare statuses across both executions
  • Inspect the node outputs and error messages in each run

This is especially useful when:

  • A webhook-triggered workflow acknowledges quickly and passes heavier processing downstream
  • An approval flow pauses and resumes later
  • One workflow triggers a second workflow to complete a business action
  • The parent execution succeeds, but the expected business outcome still does not occur

The Evaluations tab

The Evaluations tab lets you test AI workflows by running test datasets through them. This feature is designed to support workflows that use AI nodes or AI-driven decision logic.

Evaluations work by:

  • Running a test dataset through your workflow
  • Comparing actual outputs against expected results
  • Providing performance metrics and comparison data

Use Evaluations to:

  • Test your workflow over a range of inputs to see how it performs
  • Update workflows without unintentionally breaking behavior elsewhere
  • Compare performance between models or prompts
  • Validate AI-driven workflow behavior before activation

When to use Evaluations:

  • Workflows that include AI-based nodes or decision logic
  • Workflows where output quality matters across multiple test cases
  • Workflows that need validation against expected outcomes before production use

When Evaluations are not needed:

  • Simple scheduled or event-driven workflows without AI components
  • Workflows that can be validated through manual execution testing
  • Non-AI workflows where execution history provides sufficient validation

Evaluations are most useful for advanced AI workflow scenarios. For standard workflow testing, manual execution and Execution History review are typically sufficient.

Version History

Version History should be part of normal operational review, especially when a workflow that previously worked starts failing or behaving differently.

Use Version History to:

  • Review when a workflow changed
  • Understand whether an issue began after a recent edit
  • Compare workflow behavior before and after a change
  • Support safer updates when correcting production-like issues

Admins and advanced builders should check Version History before making another change to a workflow that has already begun failing. This reduces the risk of layering a second issue onto the first one.

When to use Version History

Use Version History when:

  • A workflow worked previously and now fails
  • A builder is unsure which change introduced a problem
  • You need to review recent edits before reactivating the workflow
  • Support teams need context before helping diagnose a new issue

Editor operations for testing and debugging

The workflow editor includes testing and debugging operations that are useful during both initial build and later support.

These operations should be part of normal admin and builder practice:

  • Pinning data
  • Setting or changing test values
  • Testing a single node
  • Running the full workflow

Pinning data

Pinning data keeps a known set of node outputs available while you configure downstream steps.

Use pinned data when:

  • You want stable sample data while building or troubleshooting
  • You need to configure a later node without re-running the entire flow
  • You are validating expressions or branches against a known input

Pinned data can make troubleshooting faster because it lets users focus on one part of the workflow at a time.

Setting and changing test values

Test values help validate workflow logic before relying on live production-like input.

Use test values when:

  • Validating branch logic
  • Testing expressions
  • Simulating different workflow paths
  • Checking how downstream nodes behave with expected and edge-case inputs

This is especially useful when the real trigger event is difficult to reproduce on demand.

Testing a node

Use Execute Step when you want to test one node in isolation.

This is best when:

  • Validating one node's configuration
  • Testing an expression or transformation
  • Confirming that a node no longer returns an error
  • Debugging a specific point in the flow

Node-level testing is useful, but it does not guarantee the entire workflow will behave as expected from end to end.

Running a workflow

Use Execute Workflow when you want to validate the full workflow path.

This is best when:

  • Confirming the trigger, logic, and action flow together
  • Validating which IF or Switch branch is taken
  • Testing multiple downstream steps in sequence
  • Confirming the end-to-end outcome of the workflow

Execute Step vs Execute Workflow

Use Execute Step when:

  • You are debugging one node
  • You need to verify a transformation or expression
  • You want to isolate one problem area

Use Execute Workflow when:

  • You need to validate the full execution path
  • You want to confirm branching behavior
  • You want to review all downstream outcomes together
  • You need to confirm the final business result of the workflow

Troubleshooting playbooks

The following playbooks are intended to give admins and advanced builders a consistent way to diagnose common issues.

Scheduled workflow did not run

Start by checking Execution History, then check:

  • Whether the workflow is Active
  • Whether the Schedule Trigger is configured correctly
  • Whether the expected time window has already passed
  • Whether the most recent execution shows a trigger event
  • Whether the workflow was recently edited and left inactive

If the workflow still does not run, manually execute the workflow in the editor to confirm the downstream logic works independently of the schedule.

Webhook trigger did not fire

Start by checking Execution History, then check (see Webhook Triggers):

  • Whether the workflow is Active
  • Whether the correct webhook URL was used
  • Whether the correct HTTP method was used
  • Whether the request path matches the configured trigger
  • Whether any required authentication, token, or header was missing
  • Whether the source system received a response or error

If the test path works but the production path does not, confirm the request was sent to the correct active endpoint and that the workflow was activated after configuration.

HCM event trigger did not receive an event

Start by checking Execution History, then check (see Webhook Triggers and Access and Setup):

  • Whether the required UKG Webhooks prerequisite is configured
  • Whether the correct event is subscribed or configured for the intended scenario
  • Whether the workflow is active
  • Whether the event actually occurred in the source system
  • Whether any trigger filtering or expected payload condition prevented downstream processing

When troubleshooting event-driven workflows, start by confirming the event occurred, then confirm the workflow received it, then review the execution to see how it was processed.

Workflow followed the wrong branch

Start by checking Execution History, then check:

  • The actual values available at the branching node
  • The expression or condition used in the IF or Switch node
  • Whether earlier nodes transformed the data in an unexpected way
  • Whether the value type matches the logic being applied

A workflow often appears to take the wrong path when the input value is missing, differently formatted, or not what the builder expected.
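A minimal sketch of this pitfall in plain JavaScript (not tied to any specific node; the field names are illustrative): a condition that strictly compares a boolean can silently take the unexpected branch when the incoming value is actually a string.

```javascript
// An IF-style condition that expects a real boolean.
function chooseBranch(payload) {
  return payload.approved === true ? 'approved-path' : 'rejected-path';
}

// Upstream JSON often carries booleans as strings, which fails strict comparison.
chooseBranch({ approved: true });    // "approved-path"
chooseBranch({ approved: 'true' });  // "rejected-path" -- wrong branch: type mismatch

// Normalizing the value first makes the branch decision explicit.
function chooseBranchSafe(payload) {
  const approved = String(payload.approved).toLowerCase() === 'true';
  return approved ? 'approved-path' : 'rejected-path';
}
chooseBranchSafe({ approved: 'true' }); // "approved-path"
```

When a branch looks wrong in Execution History, comparing the raw input value's type against the condition, as above, is usually the fastest explanation.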

Notification was not sent or did not reach the user

Start by checking Execution History, then check (see Notification Nodes):

  • Whether the notification node executed successfully
  • Whether the intended channel and recipient values were populated correctly
  • Whether the workflow paused waiting for a response
  • Whether the wrong notification node was used for the intended pattern
  • Whether a downstream node assumed a response that never arrived

For approval-oriented notifications, also check whether the workflow is waiting for a user response and whether the design expects a single response or multiple responses.

Approval workflow paused and did not resume

Start by checking Execution History, especially Pending runs, then check (see Approval and Workflow Nodes):

  • Whether the workflow is intentionally waiting for a response
  • Whether the correct user or users received the approval request
  • Whether the response pattern matches the workflow design
  • Whether a timeout or fallback path was expected but not configured
  • Whether the next node depends on response data that was never returned

Approval and wait patterns should always include a clear expectation for how the workflow continues.

API update failed

Start by checking Execution History, then check (see API Integration Nodes):

  • Whether the required identifiers were provided
  • Whether the workflow passed the expected field values
  • Whether permissions and prerequisites were satisfied
  • Whether the input data format matches the expected update pattern
  • Whether the failure is visible in the node output or execution error details

When troubleshooting update-oriented nodes, verify the input data first before assuming the target system rejected the operation.

Data query returned no results or unexpected results

Start by checking Execution History, then check (see Data Query Nodes):

  • Whether the lookup key or identifier was valid
  • Whether the workflow queried the correct data set or record type
  • Whether the query node returned an empty result by design
  • Whether the workflow assumed a real-time value when the underlying data source may be cached
  • Whether downstream nodes correctly handled an empty or partial response

For query workflows, treat "no results" as a real design case, not only as an exception.
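The checks above can be sketched as a result handler that routes empty and partial results deliberately (plain JavaScript; the result shape and field names are hypothetical):

```javascript
// Handle a query result where "no results" is a valid, designed case.
function handleQueryResult(rows) {
  if (!Array.isArray(rows) || rows.length === 0) {
    // Route to the designed "no results" path instead of failing downstream.
    return { path: 'no-results', records: [] };
  }
  // Guard against partial records before downstream nodes depend on them.
  const complete = rows.filter((r) => r.id != null);
  return {
    path: complete.length === rows.length ? 'complete' : 'partial',
    records: complete,
  };
}

handleQueryResult([]);                          // { path: 'no-results', records: [] }
handleQueryResult([{ id: 1 }, { name: 'x' }]);  // { path: 'partial', records: [{ id: 1 }] }
```

Giving the empty case its own branch keeps downstream nodes from assuming a value that was never produced.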

Error handling patterns

Builders will see n8n terminology in the Compose editor, so this section uses the same terms.

Error handling

A workflow should be designed to expect that some steps may fail.

Common error-handling patterns include:

  • Checking outputs before proceeding to the next step
  • Routing failures through an IF or Switch branch
  • Separating standard processing from exception handling
  • Logging, notifying, or escalating when a required step fails
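The first two patterns, checking outputs before proceeding and routing failures to a separate branch, can be sketched as follows (plain JavaScript; the output shape is illustrative, not a Compose API):

```javascript
// Check a step's output before proceeding, and route failures explicitly.
function routeAfterStep(output) {
  // Treat missing or error-shaped output as the exception path.
  if (!output || output.error) {
    return { branch: 'exception', reason: output?.error ?? 'missing output' };
  }
  // Standard path: only reached with a usable output.
  return { branch: 'standard', data: output.data };
}

routeAfterStep({ data: { id: 7 } });  // { branch: 'standard', data: { id: 7 } }
routeAfterStep({ error: 'timeout' }); // { branch: 'exception', reason: 'timeout' }
routeAfterStep(null);                 // { branch: 'exception', reason: 'missing output' }
```

Keeping the exception route explicit, rather than letting a missing value fail several nodes later, makes the resulting Execution History much easier to read.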

Error Trigger

Use an Error Trigger pattern when workflows need a defined path for handling failures.

This is useful when the desired response to a failure is not just to stop the workflow, but to:

  • Notify an admin or support team
  • Record a failure for follow-up
  • Start a recovery workflow
  • Route the failure into a manual review process

Retries and safe re-runs

Retries and re-runs should be used deliberately.

Before retrying or re-running a workflow, ask:

  • Did the workflow fail before any side effect occurred?
  • Will retrying create a duplicate notification, update, or approval action?
  • Is the downstream system safe to call again?
  • Should the workflow continue from a later point instead of starting over?

The right retry strategy depends on the node and the business outcome the workflow is trying to produce.
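One common way to make re-runs safe is an idempotency check before any side effect. This sketch is illustrative (the in-memory store and key scheme are hypothetical; a real workflow would use a durable record):

```javascript
// Skip a side effect (notification, update, approval) if this key was already processed.
const processedKeys = new Set(); // stand-in for a durable store

function runSideEffectOnce(key, sideEffect) {
  if (processedKeys.has(key)) {
    return { ran: false, reason: 'duplicate' }; // safe re-run: nothing repeated
  }
  processedKeys.add(key);
  sideEffect();
  return { ran: true };
}

let sent = 0;
runSideEffectOnce('REQ-42', () => { sent += 1; }); // { ran: true }
runSideEffectOnce('REQ-42', () => { sent += 1; }); // { ran: false } -- duplicate skipped
// sent === 1
```

With a stable business key (such as a request ID), re-running a failed execution cannot duplicate a notification or update that already went out.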

Timeouts and wait states

Some workflows intentionally pause, especially approval-oriented flows.

When a workflow includes a wait pattern:

  • Define what response is expected
  • Define what should happen if the response never comes
  • Define whether escalation, timeout handling, or fallback routing is needed

Wait states should be treated as part of the workflow design, not as an unexpected stall.
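The three questions above can be sketched as a race between the expected response and a timeout fallback (plain JavaScript; the timings and fallback action are illustrative):

```javascript
// Wait for an expected response, but define what happens if it never comes.
function waitWithTimeout(responsePromise, timeoutMs) {
  const fallback = new Promise((resolve) =>
    setTimeout(() => resolve({ outcome: 'timed-out', action: 'escalate' }), timeoutMs)
  );
  // Whichever settles first decides how the workflow continues.
  return Promise.race([responsePromise, fallback]);
}

// A response that never arrives resolves through the fallback path instead of stalling.
const neverResponds = new Promise(() => {});
waitWithTimeout(neverResponds, 50).then((r) => console.log(r.outcome)); // "timed-out"
```

The important design decision is that the timeout path resolves to a concrete next action (here, escalation) rather than leaving the run Pending indefinitely.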

Operational best practices

The following practices help keep Compose workflows supportable:

  • Start with a small working flow before expanding it
  • Validate trigger data before taking action
  • Keep branch logic readable and explicit
  • Separate standard paths from exception paths
  • Use Execution History after every meaningful change
  • Check Version History when a workflow changes behavior unexpectedly
  • Confirm permissions and prerequisites before activating a workflow
  • Be deliberate when a workflow sends notifications, updates data, or waits for approvals
  • Design for missing data, empty results, and delayed responses
These practices improve both reliability and supportability.

Common setup and support checks

Before escalating an issue, verify:

  • The workflow is active
  • The user has the right HCM security access
  • The tenant is using the latest supported UKG AuthN
  • The UKG Webhooks prerequisite is configured when using the HCM Webhooks Trigger
  • The expected node configuration is complete
  • The execution history reflects the current version of the workflow
  • The issue occurs consistently and is not tied only to test data

These checks solve many issues without requiring deeper investigation.

Feature areas admins should understand

Admins and support-oriented builders should be familiar with the feature areas covered on this page, including Execution History, Evaluations, Version History, and the editor's testing operations.

Admins do not need to become deep node experts, but they should understand how these features affect lifecycle management and troubleshooting.

Related pages

Setup and governance:

Building workflows:

Nodes and connectors: