Data Pinning

Test workflows with saved data to avoid repeated calls to external systems.

Note: Compose is only available for Beta testing. See Welcome to UKG Compose > Beta Disclosure for more information.

Data pinning allows you to save test data from a workflow node and reuse it in future executions instead of fetching fresh data. This saves time during development and protects external systems from repeated test calls.

What Data Pinning Is

Data pinning means saving the output data of a node and using that saved data in future workflow executions.

Key characteristics:

  • Available during workflow development (not in production executions)
  • Saves node output data to reuse across multiple test runs
  • Prevents repeated calls to external systems or databases
  • Works for nodes with a single main output
  • You can edit pinned data to test different scenarios

Think of it as: a way to "freeze" test data so your workflow uses the same data every time you test, without re-querying external systems.
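
Conceptually, a node with pinned data skips execution and returns the saved output instead. A minimal sketch of that behavior (hypothetical names, not the actual Compose implementation):

```javascript
// Conceptual sketch only -- illustrates pin-or-execute behavior,
// not the actual Compose implementation.
function runNode(node) {
  if (node.pinnedData !== undefined) {
    // Pinned: reuse the saved output and skip the real call
    return node.pinnedData;
  }
  // Not pinned: execute normally (e.g., call the external system)
  return node.execute();
}
```

The key point: once pinned, the external system is never contacted during test runs.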

Why Data Pinning Matters

Save Time During Development

When building a workflow with multiple nodes, you don't want to re-run the entire workflow from the beginning every time you test a change.

Without data pinning:

  1. Trigger workflow (waits for event or webhook).
  2. Query external API (takes 2-3 seconds).
  3. Transform data.
  4. Test the transformation logic you're working on.

With data pinning:

  1. Pin the API query result once.
  2. Test transformation logic repeatedly using the pinned data (instant).

Result: Faster iteration when building and debugging workflows.

Avoid Consuming Rate Limits and Quotas

External systems often have rate limits or usage quotas. Repeatedly calling APIs during testing can:

  • Consume API rate limits
  • Trigger usage charges
  • Slow down or block your testing

Example: If an external API allows 100 calls per hour and you're testing a workflow that calls the API, you could exhaust your limit quickly. Pin the API response once, then test your workflow logic without making additional API calls.

Ensure Consistent Test Data

When testing workflow logic, data consistency matters. If you're testing an IF node condition and the data changes between test runs, you can't be sure your logic works correctly.

Without data pinning:

  • Each test run queries fresh data
  • Data might change between runs
  • Hard to verify that workflow logic works correctly

With data pinning:

  • Same data every test run
  • Easier to verify that conditional logic works as expected
  • Easier to test edge cases by editing the pinned data

Test Without Triggering Real Events

Some triggers require external events (webhooks, scheduled triggers, form submissions). Data pinning lets you test downstream workflow logic without waiting for or simulating those events.

Pattern:

  1. Trigger the workflow once to capture real event data.
  2. Pin the trigger data.
  3. Test the rest of the workflow repeatedly using the pinned trigger data.

When to Use Data Pinning

Developing Multi-Step Workflows

You're building a workflow with several nodes and want to test the later nodes without re-running the earlier nodes.

Pattern:

  1. Build the first few nodes (trigger, API query, data transformation).
  2. Execute them to get real data.
  3. Pin the output of the last node you've completed.
  4. Build and test the next nodes using the pinned data.
  5. Once satisfied, unpin and test the full workflow end-to-end.

Testing Conditional Logic

You're building IF nodes, Switch nodes, or conditional branches and want to test different scenarios.

Pattern:

  1. Pin data that represents a specific scenario.
  2. Test the conditional logic.
  3. Edit the pinned data to simulate a different scenario.
  4. Test the other branch of the conditional logic.
  5. Repeat until all branches are tested.

Working with Slow or Expensive External Systems

You're querying a slow API, large database, or external system with usage costs.

Pattern:

  1. Query the external system once.
  2. Pin the result.
  3. Build and test the rest of your workflow using the pinned data.
  4. Unpin and run a full end-to-end test before publishing.

Testing Edge Cases

You want to test how your workflow handles unusual or rare data.

Pattern:

  1. Pin a typical data result.
  2. Edit the pinned data to simulate edge cases (null values, empty arrays, unexpected formats).
  3. Test how your workflow handles these cases.
  4. Add validation or error handling as needed.
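
For example, an edited pinned dataset for edge-case testing might look like this (field names are illustrative, not from a real Compose payload):

```javascript
// Illustrative edge-case dataset; field names are hypothetical.
// In a Code node you would end with: return edgeCaseItems;
const edgeCaseItems = [
  { json: { employee_id: "12345", pto_balance: null } },      // null value
  { json: { employee_id: "67890", direct_reports: [] } },     // empty array
  { json: { employee_id: "11111", hire_date: "31/12/2024" } } // unexpected date format
];
```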

How to Pin Data

Pin Node Output Data

  1. Execute the workflow (or execute a single node by selecting "Execute Step").
  2. Open the node whose output you want to pin.
  3. In the node's OUTPUT panel, you'll see the data returned by the node.
  4. At the top of the output panel, select the Pin icon or button.
  5. The data is now pinned — future executions will use this saved data instead of re-executing the node.

A banner appears at the top of the node output panel indicating that the data is pinned.

Unpin Data

When you're ready to fetch fresh data again:

  1. Open the node with pinned data.
  2. In the OUTPUT panel, you'll see a banner indicating pinned data.
  3. Select Unpin in the banner.

The pinned data is removed, and the next execution will fetch fresh data.

Edit Pinned Data

You can modify pinned data to test different scenarios:

  1. Open the node with pinned data.
  2. In the OUTPUT panel, switch to JSON view.
  3. Select Edit (pencil icon or edit button).
  4. Modify the JSON data directly.
  5. Select Save.

The edited data is now pinned and will be used in future test executions.

Use this to:

  • Test edge cases (null values, empty arrays, unusual formats)
  • Simulate different approval responses
  • Test different employee types or data scenarios
  • Verify error handling logic
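
For instance, to simulate a different approval response you might edit a pinned payload like this (a hypothetical shape; your actual approval data will differ):

```javascript
// Hypothetical pinned approval payload; edit the fields to test each path.
const pinnedApproval = {
  json: {
    request_id: "REQ-1001",
    approved: true,
    approver: "manager@example.com"
  }
};
// To test the rejection path, change approved to false before saving:
pinnedApproval.json.approved = false;
```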

Common Workflow Patterns

Pattern 1: Pin Trigger Data for Downstream Testing

Scenario: You're building a webhook workflow. You don't want to repeatedly trigger the webhook while testing downstream logic.

Steps:

  1. Trigger the workflow once by sending a webhook request.
  2. Open the webhook trigger node and pin the output data.
  3. Build the rest of your workflow (IF nodes, API calls, notifications).
  4. Test the downstream nodes repeatedly using the pinned webhook data.
  5. When satisfied, unpin the data and test the full workflow end-to-end.

Why this works: You capture real webhook data once, then test your workflow logic without needing to send new webhook requests for every test.
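
The pinned webhook output might look something like this (a hypothetical termination event; the real payload shape depends on your webhook source):

```javascript
// Hypothetical pinned webhook trigger output.
const pinnedWebhookData = [
  {
    json: {
      event: "employee.terminated",
      employee_id: "12345",
      effective_date: "2025-06-30"
    }
  }
];
```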

Pattern 2: Pin API Query Results

Scenario: Your workflow queries an external API that's slow or has rate limits. You want to test data transformations without repeated API calls.

Steps:

  1. Add an HTTP Request node that queries the external API.
  2. Execute the node to fetch data.
  3. Pin the HTTP Request node output.
  4. Add transformation nodes (Code, Edit Fields, etc.) to process the API data.
  5. Test transformations repeatedly using the pinned API response.
  6. Unpin and test end-to-end before publishing.

Why this works: You avoid consuming API rate limits and speed up testing by reusing the API response.

Pattern 3: Test Multiple Conditional Branches

Scenario: You have an IF node that routes workflow logic based on employee type. You want to test both branches.

Steps:

  1. Execute the workflow with an employee of type "full-time".
  2. Pin the data before the IF node.
  3. Test the "full-time" branch.
  4. Edit the pinned data to change employee type to "part-time".
  5. Test the "part-time" branch.
  6. Repeat for other employee types.
  7. Unpin and test with real data before publishing.

Why this works: You test all conditional branches without needing to find or create employees of each type in your test environment.
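
The two edited variants of the pinned data might look like this (employee_type is an illustrative field name):

```javascript
// Illustrative pinned-data variants for testing each IF branch.
const fullTimeVariant = { json: { employee_id: "12345", employee_type: "full-time" } };
const partTimeVariant = { json: { employee_id: "12345", employee_type: "part-time" } };
// Pin fullTimeVariant first, test that branch, then edit the pinned
// JSON to match partTimeVariant and test the other branch.
```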

Pattern 4: Copy Data from Previous Executions

Scenario: A workflow failed in production. You want to test a fix using the exact data that caused the failure.

Steps:

  1. Open the Executions tab and find the failed execution.
  2. Open the execution details.
  3. Navigate to the node whose data you want to copy.
  4. Switch to JSON view.
  5. Copy the JSON output (select and copy, or use the "Copy" button).
  6. Return to your workflow editor.
  7. Execute the node you want to pin (to load the output panel).
  8. Switch to JSON view and select Edit.
  9. Paste the copied JSON data.
  10. Select Save — the data is now pinned.
  11. Test your fix using this pinned data.

Why this works: You reproduce the exact failure scenario using real production data, then verify your fix works before republishing.

Data Pinning Best Practices

Pin Strategically

Don't pin every node. Pin the nodes that:

  • Call external systems (APIs, databases).
  • Require events or user input (webhooks, forms, scheduled triggers).
  • Take a long time to execute.
  • Cost money or consume quotas.

Leave transformation and logic nodes unpinned so you can see their real behavior during testing.

Test End-to-End Before Publishing

Data pinning is a development tool. Before publishing a workflow:

  1. Unpin all nodes.
  2. Run a full end-to-end test with real data.
  3. Verify the workflow behaves correctly.
  4. Publish.

Why: Pinned data might not reflect real-world data accurately. Always validate with real data before going to production.

Use Pinning for Edge Case Testing

After your workflow works with typical data, use pinning to test edge cases:

  • Null or empty values.
  • Unusually large datasets.
  • Data format variations.
  • Error responses from APIs.

Edit pinned data to simulate these scenarios and add validation or error handling as needed.
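
As a sketch, a Code node that guards against null and empty-array cases might look like this (field names are illustrative):

```javascript
// Illustrative validation for edge cases; field names are hypothetical.
// In a Code node, the input items are provided for you and you would
// end with: return validateItems(items);
function validateItems(items) {
  return items.map(function (item) {
    const data = item.json;
    return {
      json: {
        employee_id: data.employee_id,
        // Default a missing or null balance to 0 instead of failing downstream
        pto_balance: data.pto_balance == null ? 0 : data.pto_balance,
        // Flag records with no direct reports rather than iterating an empty array
        has_direct_reports: Array.isArray(data.direct_reports) && data.direct_reports.length > 0
      }
    };
  });
}
```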

Document What You Pinned

If you're collaborating with others, add a note to your workflow (or in Code node comments) explaining which nodes have pinned data and why.

Example:
"Webhook trigger data is pinned with a sample employee termination event. Unpin before publishing."

This prevents confusion when someone else works on the workflow.

Don't Publish with Pinned Data

Published workflows should not have pinned data. Pinned data prevents the workflow from processing real events.

Before publishing:

  1. Check all nodes for pinned data indicators.
  2. Unpin all nodes.
  3. Test the workflow end-to-end.
  4. Publish.

Data Mocking (Creating Test Data)

If you don't have real data to pin yet, you can create mock test data and then pin it:

Use the Code Node to Generate Mock Data

Add a Code node and return a test dataset:

// Two mock employee records in the standard node output format:
// each item is an object with its data under a "json" key.
return [
  {
    json: {
      employee_id: "12345",
      first_name: "Jane",
      last_name: "Doe",
      department: "HR",
      job_code: "ADMIN001",
      pto_balance: 80
    }
  },
  {
    json: {
      employee_id: "67890",
      first_name: "John",
      last_name: "Smith",
      department: "IT",
      job_code: "TECH002",
      pto_balance: 120
    }
  }
];

Execute the Code node, then pin the output. Now you can test downstream nodes with this mock data.

Use the Edit Fields (Set) Node

Add an Edit Fields node and manually define test data fields:

  • Select Add Field
  • Choose field type (String, Number, Date, Boolean)
  • Enter field name and value

Execute the node and pin the output.

Combine Mocking with Pinning

Pattern:

  1. Create mock data using Code or Edit Fields node.
  2. Pin the mock data.
  3. Build and test downstream workflow logic.
  4. Replace the mock node with a real trigger or data query.
  5. Test end-to-end with real data.
  6. Publish.

Limitations

Not Available in Production

Data pinning only works during workflow development. Once a workflow is published and running in production, nodes execute normally and cannot use pinned data.

Only for Nodes with a Single Main Output

You can only pin data for nodes that have a single main output. Nodes with multiple outputs (like some IF or Switch nodes) cannot be pinned.

Pinned Data Is Workflow-Specific

Pinned data is saved per workflow. If you copy a workflow, pinned data is not copied. You'll need to re-pin data in the new workflow.
