When deploying a Spring Boot application to Kubernetes, let's ship traces to Jaeger through an otel-collector sidecar. The environment is OpenShift, where the Service Mesh operator already provides Kiali, Jaeger, Prometheus, and Grafana. First, install the Red Hat build of OpenTelemetry operator.

Then create an OpenTelemetryCollector resource:
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  labels:
    app.kubernetes.io/instance: cne-service
    app.kubernetes.io/managed-by: opentelemetry-operator
  name: otel
  namespace: default
spec:
  observability:
    metrics: {}
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
    processors:
    exporters:
      logging:
        loglevel: info
      otlp:
        endpoint: jaeger-collector-headless.default.svc.cluster.local:14250
        tls:
          insecure: false
          ca_file: /etc/pki/ca-trust/source/service-ca/service-ca.crt
      # prometheus:               # port 8888 is already used for the collector's own metrics, so this would conflict
      #   endpoint: 0.0.0.0:8888
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: []
          exporters: [logging, otlp]
        # metrics:
        #   receivers: [otlp]
        #   processors: []
        #   exporters: [prometheus]
  mode: sidecar
  resources:
    limits:
      cpu: 400m
      memory: 1Gi
    requests:
      cpu: 10m
      memory: 128Mi
  managementState: managed
  upgradeStrategy: automatic
  volumeMounts:
    - mountPath: /etc/pki/ca-trust/source/service-ca
      name: cabundle-volume
  ingress:
    route: {}
  volumes:
    - configMap:
        name: jaeger-service-ca
      name: cabundle-volume
  targetAllocator:
    prometheusCR:
      scrapeInterval: 30s
    resources: {}
  replicas: 1
  updateStrategy: {}
  podDisruptionBudget:
    maxUnavailable: 1
Next, create an OpenTelemetry Instrumentation resource as well:
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: instrumentation-boot-otlp-app
  namespace: default
  labels:
    app.kubernetes.io/managed-by: opentelemetry-operator
spec:
  exporter:
    endpoint: 'http://localhost:4317'
  java:
    env:
      - name: OTEL_SERVICE_NAME
        value: boot-otlp-app.default
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:latest
    resources:
      limits:
        cpu: 500m
        memory: 64Mi
      requests:
        cpu: 50m
        memory: 64Mi
  sampler:
    argument: '1'
    type: parentbased_always_on
  go:
    # note: the Go agent needs the Go autoinstrumentation image, not the .NET one
    image: >-
      ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-go:latest
    resourceRequirements:
      limits:
        cpu: 500m
        memory: 32Mi
      requests:
        cpu: 50m
        memory: 32Mi
  nodejs:
    image: >-
      ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-nodejs:0.44.0
    resourceRequirements:
      limits:
        cpu: 500m
        memory: 128Mi
      requests:
        cpu: 50m
        memory: 128Mi
  resource: {}
  apacheHttpd:
    configPath: /usr/local/apache2/conf
    image: >-
      ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:1.0.3
    resourceRequirements:
      limits:
        cpu: 500m
        memory: 128Mi
      requests:
        cpu: 1m
        memory: 128Mi
    version: '2.4'
  propagators:
    - tracecontext
    - b3multi
    - jaeger
  dotnet:
    image: >-
      ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-dotnet:1.1.0
    resourceRequirements:
      limits:
        cpu: 500m
        memory: 128Mi
      requests:
        cpu: 50m
        memory: 128Mi
  nginx:
    configFile: /etc/nginx/nginx.conf
    image: >-
      ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-apache-httpd:1.0.3
    resourceRequirements:
      limits:
        cpu: 500m
        memory: 128Mi
      requests:
        cpu: 1m
        memory: 128Mi
  python:
    image: >-
      ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-python:0.41b0
    resourceRequirements:
      limits:
        cpu: 500m
        memory: 32Mi
      requests:
        cpu: 50m
        memory: 32Mi
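The sampler configured above is parentbased_always_on with argument '1', i.e. a ratio of 1.0: every new trace is sampled, and child spans follow the parent's decision. As a rough, stdlib-only sketch of how a trace-id ratio sampler decides (simplified from the OpenTelemetry SDK; class and method names here are illustrative):

```java
import java.math.BigInteger;

public class RatioSamplerSketch {
    // Simplified trace-id ratio decision: deterministic per trace id.
    // The SDK compares the low 64 bits of the trace id against ratio * Long.MAX_VALUE,
    // so the same trace id gets the same decision on every service hop.
    static boolean shouldSample(String traceIdHex, double ratio) {
        if (ratio >= 1.0) return true;   // argument '1' => always sample
        if (ratio <= 0.0) return false;
        long idUpperBound = (long) (ratio * Long.MAX_VALUE);
        long low64 = new BigInteger(traceIdHex.substring(16), 16).longValue();
        return Math.abs(low64) < idUpperBound;
    }

    public static void main(String[] args) {
        String traceId = "4bf92f3577b34da6a3ce929d0e0e4736"; // 128-bit trace id, hex
        System.out.println(shouldSample(traceId, 1.0)); // true
        System.out.println(shouldSample(traceId, 0.0)); // false
    }
}
```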
Add annotations to the deployment YAML:
#.. (snipped)
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: boot-otlp-app
      annotations:
        prometheus.io/port: '8081'
        sidecar.opentelemetry.io/inject: 'true'
        prometheus.io/path: /app/actuator/prometheus
        prometheus.io/scheme: http
        sidecar.istio.io/inject: 'true'
        proxy.istio.io/config: |
          holdApplicationUntilProxyStarts: true
        traffic.sidecar.istio.io/excludeOutboundPorts: '14250,6100'
        instrumentation.opentelemetry.io/container-names: boot-otlp-app
        instrumentation.opentelemetry.io/inject-java: instrumentation-boot-otlp-app
        kiali.io/runtimes: 'springboot-tomcat,springboot-jvm,springboot-jvm-pool'
        prometheus.io/scrape: 'true'
Now when the pod starts, both the istio and otel sidecars are injected.
Next, let's configure the Spring Boot application:
spring:
  application:
    name: fullmooney-tistory-com
  profiles:
    active: local
  sleuth:
    otel:
      config:
        trace-id-ratio-based: 1.0
      exporter:
        otlp:
          endpoint: http://localhost:4317
server:
  servlet:
    context-path: /app
  port: 8081
management:
  endpoint:
    metrics:
      enabled: true
    prometheus:
      enabled: true
  endpoints:
    web:
      exposure:
        include: "*"
  metrics:
    tags:
      application: ${spring.application.name}
  prometheus:
    metrics:
      export:
        enabled: true
  otlp:
    metrics:
      export:
        enabled: true
    tracing:
      endpoint: http://localhost:4317
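Since Sleuth puts the current trace and span ids into the logging MDC, you can also surface them in application logs for correlation with what you see in Jaeger. A minimal sketch, assuming the default traceId/spanId MDC keys:

```yaml
logging:
  pattern:
    level: "%5p [${spring.application.name:},%X{traceId:-},%X{spanId:-}]"
```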
Add the dependencies to pom.xml:
<!-- https://mvnrepository.com/artifact/io.opentelemetry/opentelemetry-sdk -->
<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-sdk</artifactId>
    <version>1.37.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/io.opentelemetry/opentelemetry-exporter-otlp -->
<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-exporter-otlp</artifactId>
    <version>1.37.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.springframework.cloud/spring-cloud-starter-sleuth -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
    <version>3.1.9</version>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-sleuth-brave</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<!-- https://mvnrepository.com/artifact/org.springframework.cloud/spring-cloud-sleuth-otel-autoconfigure -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-otel-autoconfigure</artifactId>
    <version>1.1.4</version>
</dependency>
<!-- https://mvnrepository.com/artifact/io.opentelemetry/opentelemetry-api -->
<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-api</artifactId>
    <version>1.37.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/io.micrometer/micrometer-core -->
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-core</artifactId>
    <version>1.13.6</version>
</dependency>
<!-- https://mvnrepository.com/artifact/io.micrometer/micrometer-registry-prometheus -->
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
    <version>1.13.6</version>
</dependency>
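With tracecontext among the configured propagators, the agent and sleuth-otel carry trace context between services in a W3C traceparent HTTP header (version-traceid-spanid-flags). A stdlib-only sketch of what that header looks like (class and method names are illustrative):

```java
import java.security.SecureRandom;

public class TraceParentSketch {
    // Builds a W3C traceparent header: 00-<32 hex trace id>-<16 hex span id>-<2 hex flags>
    static String newTraceParent() {
        SecureRandom rnd = new SecureRandom();
        byte[] traceId = new byte[16]; // 128-bit trace id
        byte[] spanId = new byte[8];   // 64-bit span id
        rnd.nextBytes(traceId);
        rnd.nextBytes(spanId);
        return "00-" + hex(traceId) + "-" + hex(spanId) + "-01"; // 01 = sampled
    }

    static String hex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) {
        String tp = newTraceParent();
        System.out.println(tp);
        System.out.println(tp.length()); // always 55 characters
    }
}
```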
Now add the otel collector's metrics endpoint to the Prometheus config:
global:
  # (snipped)
scrape_configs:
  - job_name: otel-collector
    static_configs:
      - targets:
          - boot-otlp-app.default.svc.cluster.local:8888
In Prometheus, query otelcol_process_runtime_heap_alloc_bytes to confirm the collector's own metrics are being scraped.
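Beyond heap usage, the collector's self-telemetry also exposes span throughput counters, which are handy for checking that the trace pipeline is actually moving data (assuming the default otelcol_ metric prefix; exact metric names can vary by collector version):

```
# spans accepted by the otlp receiver, per second
rate(otelcol_receiver_accepted_spans[5m])
# spans successfully handed to the otlp exporter, per second
rate(otelcol_exporter_sent_spans[5m])
```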
