Problem
In cloud.google.com/go/spanner, the `Query` span created by OpenTelemetry ends almost immediately because `query()` uses `defer endSpan()`, which fires on function return. At that point the actual Spanner communication has not started yet: all I/O happens later inside `RowIterator.Next()` via `ExecuteStreamingSql`.
This creates a broken parent-child relationship in traces:
```
Spanner.Query          duration=~0.01ms  ← parent, ends immediately (zero I/O)
└── RowIterator        duration=~100ms   ← child, runs long after parent ended
    └── ExecuteStreamingSql  duration=~100ms  ← actual gRPC call
```
The parent span ends before its child span even starts doing work, which violates the causal relationship that OpenTelemetry spans are meant to represent.
Root Cause
In `transaction.go` (L706-708):

```go
func (t *txReadOnly) query(ctx context.Context, statement Statement, options QueryOptions) (ri *RowIterator) {
	ctx, _ = startSpan(ctx, "Query", t.otConfig.commonTraceStartOptions...)
	defer func() { endSpan(ctx, ri.err) }() // ← ends the span on function return
	// ...
	return streamWithTransactionCallbacks(...)
}
```
`query()` performs no I/O; it only sets up parameters and returns a `RowIterator`. The `defer endSpan()` therefore fires immediately.
In `read.go` (L95):

```go
func streamWithTransactionCallbacks(...) *RowIterator {
	ctx, cancel := context.WithCancel(ctx)
	ctx, _ = startSpan(ctx, "RowIterator") // child span of Query's context
	return &RowIterator{...}
}
```
The `RowIterator` span is created as a child of `Query`, but `Query` has already ended by the time `RowIterator.Next()` is called.
In `read.go` (L284-290), the `RowIterator` span only ends when `Stop()` is called, which is correct, but its parent has long since ended.
The same pattern exists in `Read()`, with `startSpan("Read")` plus a `defer endSpan()`.
Observed Trace Data
From observed traces, all Query/RowIterator pairs showed the same pattern consistently:
| Span | Typical Duration |
| --- | --- |
| `Spanner.Query` | < 0.1ms |
| `RowIterator` | ~100ms |
| `ExecuteStreamingSql` | ~100ms |
The `Query` span duration is negligible because it performs no I/O, while the `RowIterator` child span captures all the actual work.
Comparison with Other Languages
Java (`google-cloud-spanner-java` / `ResumableStreamIterator`):
- A single span is created at construction and remains open throughout the entire iteration lifecycle
- The span only ends in the `close()` method
- Uses `try (IScope scope = tracer.withSpan(span))` to ensure child gRPC operations are properly nested

Node.js (`googleapis/nodejs-spanner`):
- `startTrace('Snapshot.run', ...)` creates a single span wrapping the entire operation
- The span ends on the stream's `'end'` or `'error'` event

Go (this issue):
- A 2-span split where `Query` ends immediately and `RowIterator` does all the work
Java and Node.js correctly use a single span that covers from query initiation through iteration completion. Only Go has this 2-span split with a broken parent-child relationship.
Expected Behavior
A single span should cover the full query lifecycle (from query initiation to iteration completion):
Before: `Query` (<0.1ms) → `RowIterator` (~100ms) → `ExecuteStreamingSql` (~100ms)
After: `Query` (~100ms) → `ExecuteStreamingSql` (~100ms)
Possible Fix
- Remove the separate `RowIterator` span from `streamWithTransactionCallbacks()` in `read.go`
- Remove the `defer endSpan()` from `query()`; instead, have `RowIterator.Stop()` end the `Query` span (since `r.streamd.ctx` would carry the `Query` span context)
- On an error before the `RowIterator` is created, end the span immediately
- Apply the same fix to `Read()`
This would align with the Java/Node.js approach of a single span covering the full operation.
Environment
- cloud.google.com/go/spanner v1.88.0
- Go 1.24
Related Issues
- … the `Query` and `RowIterator` spans as problematic). Merging them into one span would also reduce span count.
- … `iterator.Done` propagated to the span via `Stop()`; same structural root cause.