How Do I Debug Failed TOSCA TestSteps? 

Introduction

Imagine you’ve built a TOSCA automation suite with dozens or even hundreds of TestCases. You're confident they've all passed in ScratchBook. But during a full test run, a few TestSteps start failing, breaking your Continuous Integration pipeline. You stare at cryptic error messages and logs, wondering: “Where did it go wrong?”

Debugging failed TOSCA TestSteps can be daunting even for seasoned testers. But mastering it is essential for building robust, reliable automation. In this blog post, we'll walk through how to debug failed TestSteps in Tricentis TOSCA, combining practical techniques, configuration tweaks, and real-world strategies. Whether you're new to TOSCA or revisiting this during your TOSCA training, this deep dive will sharpen your troubleshooting skills.


Why Debugging TestSteps Is Critical in TOSCA

  • Prevent flakiness and instability: Frequent failures on the same step point to instability in the test design or the system under test (SUT).

  • Save time in regression cycles: Effective debugging prevents repeated failures, reduces triage time, and stabilizes your execution lists.

  • Build confidence: When you know how to trace and fix failing steps, your automation suite becomes more maintainable, increasing trust from stakeholders.

  • Align with best practices: TOSCA best practices recommend incorporating Recovery Scenarios, RetryLevels, and CleanUp Scenarios to handle unexpected failures.

Key Concepts to Understand Before Debugging

Before jumping into debugging techniques, let’s clarify some crucial TOSCA concepts:

  1. TestStep vs TestStepValue

    • A TestStep is a higher-level action in a TestCase, composed of one or more TestStepValues.

    • When a TestStep fails, often it's due to one problematic TestStepValue.

  2. Execution Logging Level

    • TOSCA Commander lets you configure how detailed your execution results are logged.

    • Common levels:

      • TestStepValues – All: logs at the individual value level

      • TestSteps: logs aggregated at the TestStep level

    • The more detail you log, the easier failures are to pinpoint.

  3. Screenshots and Documentation

    • TOSCA can capture screenshots on failed TestSteps (e.g., verification failure or control not found) if configured.

    • Generate detailed execution documentation with logs and screenshots to analyze failures efficiently.

  4. Recovery, Retry, and Clean-up Scenarios

    • These are built-in TOSCA patterns for error handling.

    • RetryLevel controls whether recovery reruns a TestStep, TestCase, or higher-level scope.

    • CleanUp Scenarios reset the environment after unrecoverable failures.

Step-by-Step Guide to Debug Failed TestSteps

Here’s a structured workflow to debug failing TestSteps in TOSCA.

1. Reproduce the Failure in ScratchBook

  • Run the failing TestStep alone using Run in ScratchBook.

  • Observe the log output to check why it fails.

  • Advantages:

    • Isolates the issue from other parts of the TestCase.

    • Easier to run and rerun quickly.

Example: Suppose your TestStep “Login” is failing. In ScratchBook, you run just the “Login” step, and the log shows “Button not found.” This tells you immediately something is wrong with module mapping or locators.

2. Increase Logging Detail for Troubleshooting

If the failure isn’t obvious:

  • Go to Settings → TBox → Logging.

  • Set detailed logging for TestStepValues or TestSteps.

  • Enable screenshot capture on failure and specify the screenshot directory and format.

Once detailed logging and screenshots are enabled, rerun the failing TestCase from your ExecutionList.

  • Examine the logs and screenshots for visual context.

  • Generate a documentation report if needed to review all failures in one place.

3. Analyze the Failure Pattern

Use logs and screenshots to identify a pattern:

  • Consistent or intermittent failures

    • Consistent failures point to wrong module mapping, incorrect TestStepValue, or stale control identifiers.

    • Intermittent failures indicate timing issues, synchronization problems, or environment instability.

  • Type of failure

    • Detection failure: TOSCA cannot find the control.

    • Verification failure: TOSCA found the control, but its value or state did not match expectations.

  • Step location in flow

    • Check which TestStepValue failed.

    • In multi-value TestSteps, often only one value is problematic.
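
The consistent-vs-intermittent distinction above is easy to formalize once you have pass/fail history per step. As an illustrative sketch (the run data below is made up for the example, not a real TOSCA export format):

```python
# Classify failing TestSteps as consistent or intermittent from run history.
# The history dict is illustrative only, not a TOSCA result format.
history = {
    "Login":       ["fail", "fail", "fail", "fail"],  # fails every run
    "Submit Form": ["pass", "fail", "pass", "fail"],  # fails sporadically
    "Logout":      ["pass", "pass", "pass", "pass"],  # healthy
}

def classify(results):
    failures = results.count("fail")
    if failures == 0:
        return "healthy"
    if failures == len(results):
        return "consistent failure -> check mapping / TestStepValues"
    return "intermittent failure -> check timing / synchronization"

for step, results in history.items():
    print(f"{step}: {classify(results)}")
```

Consistent failures send you to sections 6 (module mapping) below; intermittent ones to section 7 (synchronization).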

4. Use Recovery and Retry Strategies

Leverage TOSCA’s built-in error-handling:

  1. Create a Recovery Scenario

    • Identify common failure causes such as pop-ups, modal dialogs, or timeouts.

    • Build a recovery flow: check for conditions, close pop-ups, refresh, etc.

  2. Configure RetryLevel

    • Set RetryLevel to TestStep so that on failure, TOSCA applies recovery and retries the failing step.

  3. Add a CleanUp Scenario

    • CleanUp resets your environment if recovery fails and prepares for subsequent test runs.

Example:
Your “Submit Form” TestStep fails because a session timeout popped up.

  • Recovery scenario: detect timeout dialog → click “OK” → navigate back to the form.

  • RetryLevel = TestStep: TOSCA retries “Submit Form” after recovery.

  • CleanUp: if recovery fails, forcibly restart the browser.
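
TOSCA configures all of this declaratively, but it helps to see the control flow it implements spelled out. A minimal sketch with RetryLevel = TestStep (the functions here are hypothetical stand-ins, not TOSCA APIs):

```python
# Sketch of the recovery/retry/cleanup control flow with RetryLevel = TestStep.
# run_step, recover, and clean_up are hypothetical stand-ins, not TOSCA APIs.
def execute_with_recovery(run_step, recover, clean_up, max_retries=1):
    for attempt in range(max_retries + 1):
        try:
            return run_step()                # e.g. "Submit Form"
        except Exception:
            if attempt < max_retries and recover():
                continue                     # recovery succeeded -> retry the step
            clean_up()                       # unrecoverable -> reset the environment
            raise

# Example: first attempt hits a timeout dialog, recovery dismisses it, retry passes.
attempts = []
def flaky_submit():
    attempts.append(1)
    if len(attempts) == 1:
        raise RuntimeError("session timeout dialog")
    return "submitted"

print(execute_with_recovery(flaky_submit, recover=lambda: True, clean_up=lambda: None))
```

Note how CleanUp only runs when recovery has failed or retries are exhausted, matching TOSCA's intent: recover first, reset the environment only as a last resort.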

5. Use Breakpoints and Debugging (for Custom or TBox Code)

For custom TBox modules:

  • Attach Visual Studio to the TBox Agent process.

  • Set breakpoints in your custom assembly.

  • Trigger execution through TOSCA Commander; when a breakpoint is hit, inspect variable states and exceptions.

6. Validate Module Mapping and Attributes

Failures often arise from unstable or incorrect control mapping:

  • Re-scan your modules using XScan.

  • Use stable properties: choose attributes less likely to change (Name, InnerText, AccessibleName) rather than dynamic IDs.

  • Re-synchronize TestSteps to align with updated module definitions.

7. Introduce Waits, Synchronization, or Validation Logic

Timing issues often cause intermittent failures:

  • Insert Wait TestStepValues (e.g., “Wait until Element Visible”) before interacting with a control.

  • Use If/Else logic to verify UI state before performing an action.

  • Use buffers and configuration parameters to handle dynamic data.

Example TestStep:

TestStep: Verify and Click Submit  

  - Value 1: [Button: Submit] – ActionMode = Verify → Buffer = SubmitExists  

  - Value 2: [If SubmitExists = true] → ActionMode = Execute → [Click Submit]
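
Under the hood, a “wait until visible” step is just a poll-with-timeout loop. As a hedged sketch of that pattern (the visibility check is a caller-supplied placeholder, not a TOSCA call):

```python
import time

# Poll-with-timeout loop behind "wait until element visible" style steps.
# is_visible is a caller-supplied check; nothing here is a TOSCA API.
def wait_until(is_visible, timeout=10.0, interval=0.5):
    deadline = time.monotonic() + timeout
    while True:
        if is_visible():
            return True                  # control appeared -> proceed with the action
        if time.monotonic() >= deadline:
            return False                 # caller decides: fail the step or recover
        time.sleep(interval)             # back off instead of hammering the UI

# Example: the "element" becomes visible on the third poll.
state = {"polls": 0}
def becomes_visible():
    state["polls"] += 1
    return state["polls"] >= 3

print(wait_until(becomes_visible, timeout=5, interval=0.01))
```

The key design point is the explicit timeout: a bounded wait turns a hang into a diagnosable failure, whereas a fixed sleep either wastes time or is still too short on a slow day.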


8. Re-Run with Clean Execution Lists

After debugging:

  1. Rebuild your ExecutionList with updated TestCases.

  2. Run the ExecutionList and monitor logs, screenshots, retry behavior, and recovery outcomes.

  3. If failures persist, mark problematic TestSteps and rerun in ScratchBook to validate fixes.

Common Debugging Scenarios & Sample Solutions

| Scenario | Problem | Debugging Steps | Fix / Solution |
| --- | --- | --- | --- |
| Control Not Found | Detection failure | Run in ScratchBook → element missing | Re-map module, re-scan, use stable attributes |
| Verification Fails | Value mismatch | Compare screenshots → value wrong | Update expected values, refine verification logic |
| Intermittent Timeout | Timing issue | Review logs → control appears late | Add wait steps, conditional logic, retry settings |
| Unexpected Pop-up Dialog | Recovery required | Recovery scenario missing | Build recovery flow, set RetryLevel to TestStep |
| Custom TBox Module Error | Exception in code | Debug with breakpoints | Fix code, catch exceptions, improve logging |


Best Practices for Debugging in a TOSCA Automation Course

Whether you're working through a TOSCA automation course or mentoring a team, build these debugging habits early:

  • Test new TestSteps in ScratchBook first.

  • Enable detailed logging and screenshots early.

  • Incorporate Recovery, Retry, and CleanUp scenarios from the beginning.

  • Practice on applications with popups, dynamic UI, or performance delays.

  • Learn the logging settings so you know how to tweak detail levels when troubleshooting.

  • Debug custom modules using breakpoints.

  • Generate reports to analyze failure patterns.

Challenges & Pitfalls in Debugging TOSCA TestSteps

  • Excessive Logging: slows execution. Use detailed logging only during debugging.

  • Overuse of Recovery: may mask underlying design flaws.

  • Stale Modules: UI changes require module rescans.

  • Complex Custom Code: may introduce silent failures if error handling is missing.

  • Poor Synchronization: not using waits can lead to timing failures.

Real-World Example — Debugging a Failing SAP Login Flow

Scenario: Automating SAP Fiori login.
Failure: “Username field not found” intermittently during CI runs.
Debugging: Run in ScratchBook → intermittent failure observed. Enable detailed logs and screenshots → login dialog sometimes loads slowly.
Solution: Add wait step, verify field before typing, build recovery to handle dialogs, set RetryLevel = TestStep.
Result: CI runs stabilize, login failures drop to zero, logs capture recovery steps.

Summary: Key Takeaways

  • Use ScratchBook, detailed logging, and screenshots for clear insight.

  • Leverage Recovery, Retry, and CleanUp scenarios for resilience.

  • Maintain module definitions, use waits, and debug smartly to eliminate flaky failures.

Conclusion 

Debugging in TOSCA is not just about fixing errors; it's about creating a resilient automation suite. By combining detailed logging, recovery strategies, and smart design, you can turn failing TestSteps into stable, reliable workflows.

Start by applying these techniques to one failing TestStep today and watch your automation suite become more robust and efficient.


