Tracing enables you to monitor and analyze your API Proxy's message-processing flow in real time. Trace operations are performed per API Proxy and are activated separately for each one. This module is particularly useful for:
  • Understanding how policies work
  • Detecting performance issues
  • Debugging
  • Examining request/response flow
  • Validating transformations

API Proxy-Based Trace

You can start a separate trace session for each API Proxy

Detailed Tracking

You can track each policy’s execution step by step and view log records

Performance Analysis

You can analyze timing metrics and detect bottlenecks

Debugging

You can easily detect errors and perform root cause analysis
Trace runs per API Proxy. Clicking a log row opens the trace detail drawer on the right, with timing breakdown, the Client–Gateway–API map, and before/after policy panels. For the full walkthrough, see Step-by-Step Tracing.

Starting Trace Mode

Trace mode is started separately for each API Proxy. You can activate trace mode from the API Proxy’s own page.

Prerequisites

Before starting trace mode:
  1. API Proxy Must Be Loaded: The API Proxy you want to trace must be loaded into at least one Environment.
  2. Go to API Proxy Page: Go to the detail page of the API Proxy you want to trace.
  3. Select Environment: From the Environments where the API Proxy is loaded, select the one for which trace mode will be opened.
  4. Click Start Button: Activate trace mode by clicking the Start button.
Activating Trace Mode
Using the Custom Query filter field next to the environment selection, you can trace only the data you are interested in.
When trace mode is activated:
  • Log record content is expanded to allow detailed examination
  • Log records are written to MongoDB configuration database
  • Detailed log records are created for all executed policies
  • Records continue to be stored until trace mode is stopped or automatically closed by the platform after 5 minutes

Trace Records

After trace mode is activated, requests coming to the API Proxy are automatically tracked and detailed records are created.
Trace Log Records
Displayed log records are not updated automatically; use the Refresh Logs button to see new records.
Each log record in the table corresponds to a request from a client to this API Proxy and the response returned for that request.
Because the log record of the API Call Policy is kept separately, two log entries appear for the same request: the first shows the before and after state of the message in the main flow, and the second shows the request and response produced by the API Call.
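The pairing of these duplicate entries can be sketched by grouping on the correlation ID. This is a minimal illustration only; the dictionary keys (`correlationId`, `source`) are assumed field names, not the actual Apinizer log schema:

```python
from collections import defaultdict

# Hypothetical trace records; key names are illustrative, not the real schema.
records = [
    {"correlationId": "abc-123", "source": "main-flow", "method": "POST"},
    {"correlationId": "abc-123", "source": "api-call", "method": "GET"},
    {"correlationId": "def-456", "source": "main-flow", "method": "GET"},
]

# Group records by correlation ID so the main-flow entry and the
# API Call Policy entry for the same request appear together.
by_correlation = defaultdict(list)
for record in records:
    by_correlation[record["correlationId"]].append(record)

for cid, group in by_correlation.items():
    print(cid, [r["source"] for r in group])
```

Requests that used an API Call Policy end up with two entries in their group; plain requests have one.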

Trace List

The following information is displayed for each record in the trace list:
  • Timestamp: Date and time when the request arrived
  • Method: HTTP method (GET, POST, PUT, DELETE, etc.)
  • Path / Endpoint: Request path and endpoint name
  • Status Code: Response status code (200, 404, 500, etc.)
  • Duration: Total processing time (ms)
  • Policies: Number of policies executed
  • Correlation ID: Request-specific correlation ID
When API Call Policy is used, double log records appear for the same request:
  1. Before and after state of the message in the main flow
  2. Request and response message coming out of the API Call
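Once exported, rows with these fields are easy to filter programmatically, for example to isolate slow or failed requests. A minimal sketch, assuming rows as dictionaries with invented key names (`statusCode`, `durationMs`), not the actual export format:

```python
# Hypothetical trace-list rows using the fields described above;
# key names are illustrative, not the actual Apinizer schema.
trace_list = [
    {"method": "GET",  "path": "/orders",   "statusCode": 200, "durationMs": 85},
    {"method": "POST", "path": "/orders",   "statusCode": 500, "durationMs": 1420},
    {"method": "GET",  "path": "/payments", "statusCode": 200, "durationMs": 2210},
]

# Slow requests: total processing time above a chosen threshold.
slow = [r for r in trace_list if r["durationMs"] > 1000]

# Failed requests: 5xx response status codes.
failed = [r for r in trace_list if 500 <= r["statusCode"] < 600]

print(len(slow), len(failed))
```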

Trace Operations

The following operations can be performed for each trace record:
The first control at the end of the row opens a window displaying the log records of that request message (Detailed View button). In the opened window, logs are divided into sections related to the message flow; clicking a section's name displays the log records for that area. The Overview section is open by default (Detailed View dialog).

Tracking Policy Flow

Click the Select button to view the detailed execution information Apinizer keeps for policies run on the API Proxy while step-by-step tracing is on. When you click a table row or Select in the row menu, the trace detail drawer opens on the right. At the top you see the timing breakdown and the Client → Gateway → API map; request-line (top row) and response-line (bottom row) policy nodes show execution order, including policies from an API Proxy Group when applicable. Nodes show a green check for success, ! for error or block, and a faded appearance with an S badge for skipped policies. Use ‹ Back / Next › or click a node to move between steps. For the full walkthrough, see Step-by-Step Tracing.
Example when all steps succeed: timing bar, map, and Client step with request/response panels (Trace drawer: successful flow with timing, map, and Client step).

Policy Execution Details

Policies executed first when a request arrives:
  • Policy Name: Name of the executed policy
  • Execution Time: Policy execution time (ms)
  • Status: Success / Failure status
  • Changes: Changes made by the policy to the message (header, body, variable changes)
Examples: Authentication, rate limiting, IP control
Routing step to Backend API:
  • Selected Upstream: Selected upstream target
  • Load Balancing Decision: Load balancing algorithm decision
  • Connection Time: Connection time to backend (ms)
  • Backend Response Time: Backend response time (ms)
  • Retry/Failover: Retry or failover status
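The "Load Balancing Decision" recorded here can be pictured with a round-robin sketch. Round robin is only one common algorithm; the actual decision depends on the API Proxy's routing configuration, and the upstream URLs below are made up:

```python
from itertools import cycle

# Hypothetical upstream targets; a real list would come from the
# API Proxy's routing settings.
upstreams = ["http://backend-1:8080", "http://backend-2:8080"]
next_upstream = cycle(upstreams)

# Each request takes the next target in turn (round robin); the trace
# record's "Selected Upstream" field shows which one was chosen.
selections = [next(next_upstream) for _ in range(4)]
print(selections)
```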
Policies executed after response comes from backend:
  • Policy Name: Name of the executed policy
  • Execution Time: Policy execution time (ms)
  • Status: Success / Failure status
  • Changes: Changes made to the response message
Examples: Response transformation, cache writing, logging
Policies executed in case of error:
  • Error Type: Error type (authentication, routing, policy, etc.)
  • Error Message: Error message
  • Handler Policies: Executed error handler policies
  • Final Response: Final response returned to client
Fault Handler only runs when an error occurs and allows you to customize the error response.

Detailed Log Records

When you select a log row, the drawer summarizes success or failure via the map and header; the lower panel shows tables and bodies for the selected node. Overview example — map, summary line, and request headers (Trace drawer: map and gateway ingress summary). Below, common steps in the same drawer are summarized with the recommended screenshots (full walkthrough on Step-by-Step Tracing).
When the Client node is selected on the map, the left column shows the request from the client (HTTP Info, headers, parameters, body) and the right column shows the response sent to the client (Trace drawer: Client step request and response).
Click a policy node on the map to see execution info and Before / After accordions comparing message state (Trace drawer: policy Before and After).
The Target (API) summary, the Routing table (expanded row; see HTTP Routing), and the response from the target are illustrated below in order (Trace drawer: backend target summary; Trace drawer: routing table; Trace drawer: response from target).

Request/Response Comparison

Trace mode shows how the message changes along the flow:

Before/After

You can compare the before and after state of the message for each policy

Transformation Analysis

You can see the effect of transformation policies

Header Changes

You can see added, modified, or deleted headers

Body Changes

You can see JSON/XML transformations and content changes

Performance Analysis

Trace mode provides detailed timing metrics to detect performance issues.

Timing Metrics

The following metrics are displayed for each trace record:
  • Total Duration: Total entry-to-exit time of the request (ms)
  • Pre-flow Duration: Total execution time of pre-flow policies
  • Route Duration: Time to connect to the backend and receive the response
  • Backend Duration: Backend API response time (net)
  • Post-flow Duration: Total execution time of post-flow policies
  • Gateway Overhead: Time added by the Apinizer Gateway (Total − Backend)
If Gateway Overhead is high, optimize the policies; if Backend Duration is high, optimize the backend API.
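The overhead arithmetic is simple enough to sketch directly. The values below are invented example metrics, not defaults:

```python
# Timing metrics from a hypothetical trace record (values in ms).
total_duration = 240
pre_flow = 35
backend_duration = 150
post_flow = 25

# Gateway Overhead is the time Apinizer adds on top of the backend call:
# Total Duration minus Backend Duration.
gateway_overhead = total_duration - backend_duration
print(gateway_overhead)  # 90

# Pre-flow and post-flow policy time accounts for part of that overhead;
# the remainder is routing and connection handling.
policy_time = pre_flow + post_flow
print(policy_time)  # 60
```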

Policy Performance Analysis

To analyze policy performance:

Slowest Policies

You can identify and optimize policies taking the longest time

Policy Count

You can see the total number of policies executed and remove unnecessary ones

Average Policy Duration

You can monitor the average execution time of each policy

Policy Execution Order

You can improve performance by changing the order of policies
Optimization Recommendations:
  • Cache Policy: Use cache to reduce backend calls
  • Conditional Flow: Conditionally skip unnecessary policies
  • Script Optimization: Optimize slow operations in script policies
  • Transformation: Remove unnecessary transformations
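The idea behind the Cache Policy recommendation can be shown with a minimal sketch: repeated identical requests are answered from a cache instead of calling the backend each time. The `backend_call` function below is a stand-in for illustration, not an Apinizer API:

```python
# Count how many times the (hypothetical) backend is actually called.
backend_calls = 0

def backend_call(path):
    global backend_calls
    backend_calls += 1
    return {"path": path, "data": "..."}

cache = {}

def handle_request(path):
    if path not in cache:          # cache miss: go to the backend
        cache[path] = backend_call(path)
    return cache[path]             # cache hit: skip the backend

for _ in range(5):
    handle_request("/products")

print(backend_calls)  # 1
```

Five identical requests result in a single backend call; in a trace, cached responses show up as requests with no backend duration.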

Backend Performance Metrics

To monitor backend API performance:
  • Connection Time: TCP connection time to the backend server
  • SSL Handshake Time: SSL handshake time for HTTPS connections
  • Response Time: Backend response generation time
  • Total Backend Time: Connection + response time combined
  • Backend Status: Success status of the backend call
  • Retry/Failover Count: Number of retries or failovers performed
High Connection Time indicates that the backend server is slow or has network issues. High Response Time indicates that the backend API needs to be optimized.

Use Cases

Scenario 1: Performance Issue Detection

Situation: Response times of an API Proxy are higher than expected.
  1. Start Trace: Activate trace mode from the API Proxy page.
  2. Examine Slow Requests: Find slow requests (e.g., >1000 ms) in the trace records.
  3. Examine Policy Flow: Open the trace detail drawer (table row or Select) and identify the slowest policies.
  4. Detect Bottlenecks:
    • Is the backend API slow? → Optimize the backend
    • Are policies slow? → Optimize the script/transformation
    • Is a database query slow? → Use a cache
  5. Perform Optimization: Fix the detected issues and test again with trace.

Scenario 2: Debugging

Situation: Some requests return 500 error and the cause is unknown.
  1. Start Trace: Activate trace mode from the API Proxy page.
  2. Find Failed Requests: Find 5xx errors in the trace records.
  3. Find the Policy Where the Error Occurred: Open the trace detail drawer (table row or Select) and find the policy marked with !.
  4. Examine Policy Details:
    • Examine the message coming into the policy (Before)
    • Read the error message
    • Examine the detailed log records
  5. Perform Root Cause Analysis:
    • Is the data format wrong?
    • Is a header missing?
    • Is it a script error?
    • Is the backend unreachable?
  6. Fix and Test: Fix the issue and test again with trace.

Scenario 3: Transformation Validation

Situation: Checking if JSON to XML transformation works correctly.
  1. Start Trace: Activate trace mode from the API Proxy page.
  2. Send Test Request: Send a sample JSON request from the Test Console.
  3. Select Transformation Policy: Open the trace detail drawer (table row or Select) and click the transformation policy node.
  4. Compare Before/After:
    • Before: the incoming JSON message
    • After: the converted XML message
    • Check that the transformation is correct
  5. Check the Message Going to the Backend: Verify in the Backend API log records that the outgoing message is in XML format and correct.
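The Before/After check in this scenario can be mimicked offline with the standard library. The field-to-element mapping below is a stand-in for the transformation policy's actual rules, which may differ:

```python
import json
import xml.etree.ElementTree as ET

# "Before" message captured in the trace: the incoming JSON body.
before_body = '{"orderId": 42, "status": "shipped"}'

# Stand-in for the transformation policy: build XML from the JSON fields.
data = json.loads(before_body)
root = ET.Element("order")
for key, value in data.items():
    ET.SubElement(root, key).text = str(value)
after_body = ET.tostring(root, encoding="unicode")

# Validate the "After" message: well-formed XML carrying the same values.
parsed = ET.fromstring(after_body)
print(after_body)
print(parsed.findtext("orderId"), parsed.findtext("status"))
```

Parsing the "After" body back and comparing field values against the "Before" JSON is exactly the check the trace drawer lets you do visually.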

Scenario 4: Conditional Flow Testing

Situation: Testing if conditional policies work correctly.
  1. Start Trace: Activate trace mode from the API Proxy page.
  2. Send Requests for Different Conditions:
    • A request for a premium user
    • A request for a normal user
    • A request for a guest
  3. Examine the Trace for Each Request: Open the trace detail drawer (table row or Select) and see which policies ran.
  4. Check Condition Evaluation:
    • What was the condition expression?
    • What was the evaluation result?
    • Did the correct policies run?
  5. Fix Conditions if Necessary: Fix incorrect conditions and test again with trace.
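What this scenario verifies can be sketched as a small decision function: which policies execute depends on a per-request condition. The user types and policy names below are invented for illustration, not taken from a real configuration:

```python
# Sketch of conditional policy execution; names are hypothetical.
def policies_for(user_type):
    executed = ["authentication"]                 # always runs
    if user_type == "premium":
        executed.append("priority-routing")       # condition: premium user
    elif user_type == "normal":
        executed.append("rate-limiting")          # condition: normal user
    else:
        executed.append("strict-rate-limiting")   # fallback: guest
    return executed

# One request per condition; compare which policies ran, mirroring
# what the trace drawer shows for each request.
for user in ("premium", "normal", "guest"):
    print(user, policies_for(user))
```

If the trace shows a node faded with an S badge where you expected it to run (or vice versa), the condition expression is the first thing to re-check.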

Best Practices

Trace Usage

Use Frequently in Development Environment

  • Trace continuously during development
  • Always test with trace when adding new policies
  • Validate API changes with trace

Be Careful in Production

  • Activate trace in production only when necessary
  • Trace automatically closes after 5 minutes
  • Consider performance impact

Use Custom Query

  • Filter with Custom Query from the filter field next to environment selection
  • Trace only relevant endpoints
  • Minimize unnecessary trace records

Use on API Proxy Basis

  • Start trace separately for each API Proxy
  • Activate trace mode from the relevant API Proxy’s page
  • Trace records are stored in MongoDB

Performance Monitoring

Create Performance Baseline:
  • Measure average response time of API under normal conditions
  • Record average execution time of each policy
  • Monitor deviations from baseline and set alarms
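A baseline-and-alarm check like the one described above can be sketched in a few lines. The sample values and the 1.5× threshold are examples, not recommended defaults:

```python
from statistics import mean

# Baseline: average response times (ms) measured under normal conditions.
baseline_samples = [120, 130, 110, 125, 115]
baseline = mean(baseline_samples)

# Alarm when the current average deviates from the baseline by more than
# a chosen factor.
def exceeds_baseline(current_avg, factor=1.5):
    return current_avg > baseline * factor

print(baseline)               # 120
print(exceeds_baseline(140))  # within tolerance
print(exceeds_baseline(300))  # deviation worth an alarm
```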
Regular Monitoring:
  • Run performance trace once a week
  • Perform trend analyses
  • Detect slowdowns early
Optimization Cycle:
  1. Detect bottlenecks with trace
  2. Optimize
  3. Validate improvement with trace
  4. Document results

Debugging

Repeatable Test Scenarios:
  • Prepare test scenarios before starting trace
  • Get consistent results using the same test data
  • Test edge cases
Systematic Approach:
  1. Isolate the problem (which endpoint, under which condition?)
  2. Collect detailed information with trace
  3. Perform root cause analysis
  4. Fix
  5. Validate with trace
  6. Document the process
Before/After Comparison:
  • Compare input/output messages for each policy
  • Detect unexpected changes
  • Check transformation correctness

Step-by-Step Tracing

API Proxy-based trace operations

Test Console

API test and debug console

Policy Management

Managing and configuring policies

API Traffic Log Settings

Configuring log record settings

Message Processing and Policy Application

Information about message flow and policy execution

Conditional Policy Execution

Detailed information about conditional flow