@@ -190,7 +190,8 @@ the compression ratio. This is illustrated in the following diagram.
 
 ![Traffic reduction](img/0156_traffic_reduction_use_case.png)
 
-> Note 1: A fallback mechanism can be used to handle the case where the new protocol is not supported by the target.
+> [!NOTE]
+> **1:** A fallback mechanism can be used to handle the case where the new protocol is not supported by the target.
 > More on this mechanism in this [section](#protocol-extension-and-fallback-mechanism).
 
 #### Phase 2
@@ -274,6 +275,7 @@ service ArrowMetricsService {
 }
 ```
 
+> [!IMPORTANT]
 > **Unary RPC vs Stream RPC**: We use a stream-oriented protocol **to get rid of the overhead of specifying the schema
 > and dictionaries for each batch.** A state will be maintained receiver side to keep track of the schemas and
 > dictionaries. The [Arrow IPC format](#arrow-ipc-format) has been designed to follow this pattern and also allows the
@@ -391,7 +393,8 @@ By storing Arrow buffers in a protobuf field of type 'bytes' we can leverage the
 protobuf implementations (e.g. C++, Java, Rust) in order to get the most out of Arrow (relying on zero-copy ser/deser
 framework).
 
-> Note: By default, ZSTD compression is enabled at the Arrow IPC level in order to benefit from the best compression
+> [!NOTE]
+> By default, ZSTD compression is enabled at the Arrow IPC level in order to benefit from the best compression
 > ratio regardless of the collector configuration. However, this compression can be disabled to enable it at the global
 > gRPC level if it makes more sense for a particular configuration.
 
@@ -450,7 +453,8 @@ To indicate not-retryable errors the server is recommended to use code INVALID_A
 details
 via `error_message`.
 
-> Note: [Appendix A](#appendix-a---protocol-buffer-definitions) contains the full protobuf definition.
+> [!NOTE]
+> [Appendix A](#appendix-a---protocol-buffer-definitions) contains the full protobuf definition.
 
 ### Mapping OTel Entities to Arrow Records
 
@@ -489,7 +493,8 @@ same schema are grouped in a homogeneous stream. The first message sent contains
 the schema definition and the dictionaries. The following messages will not need to define the schema anymore.
 The dictionaries will only be sent again when their content changes. The following diagram illustrates this process.
 
-> Note: The approach of using a single Arrow record per OTel entity, which employs list, struct, and union Arrow data
+> [!NOTE]
+> The approach of using a single Arrow record per OTel entity, which employs list, struct, and union Arrow data
 > types, was not adopted mainly due to the inability to sort each level of the OTel hierarchy independently. The mapping
 > delineated in this document, on average, provides a superior compression ratio.
 
@@ -533,7 +538,8 @@ engines.
 - The avoidance of complex Arrow data types (like union, list of struct) optimizes compatibility with the Arrow
 ecosystem.
 
-> Note: Complex attribute values could also be encoded in protobuf once the `pdata` library provides support for it.
+> [!NOTE]
+> Complex attribute values could also be encoded in protobuf once the `pdata` library provides support for it.
 
 #### Spans Arrow Mapping
 
@@ -581,7 +587,8 @@ As usual, each of these Arrow records is sorted by specific columns to optimize
 batch of metrics containing a large number of data points sharing the same attributes and timestamp will be highly
 compressible (multivariate time-series scenario).
 
-> Note: every OTLP timestamps are represented as Arrow timestamps as Epoch timestamps with nanosecond precision. This representation will
+> [!NOTE]
+> Every OTLP timestamp is represented as an Arrow timestamp, i.e. an Epoch timestamp with nanosecond precision. This representation will
 > simplify the integration with the rest of the Arrow ecosystem (numerous time/date functions are supported in
 > DataFusion for example).
 > Note: aggregation_temporality is represented as an Arrow dictionary with a dictionary index of type int8. This OTLP