Describe the bug
S3 Object storage not working
To Reproduce
Steps to reproduce the behavior:
```yaml
auth_enabled: true
server:
  http_listen_port: 3100
distributor:
  receivers:         # this configuration will listen on all ports and protocols that tempo is capable of.
    jaeger:          # the receivers all come from the OpenTelemetry collector. more configuration information can
      protocols:     # be found there: https://github.com/open-telemetry/opentelemetry-collector/tree/master/receiver
        thrift_http: #
        grpc:        # for a production deployment you should only enable the receivers you need!
        thrift_binary:
        thrift_compact:
    zipkin:
    otlp:
      protocols:
        http:
        grpc:
    opencensus:
ingester:
  trace_idle_period: 10s            # the length of time after a trace has not received spans to consider it complete and flush it
  traces_per_block: 100             # cut the head block when it hits this number of traces or ...
  max_block_duration: 5m            # this much time passes
compactor:
  compaction:
    compaction_window: 1h           # blocks in this time window will be compacted together
    max_compaction_objects: 1000000 # maximum size of compacted blocks
    block_retention: 1h
    compacted_block_retention: 10m
    flush_size_bytes: 5242880
storage:
  trace:
    backend: s3                     # backend configuration to use
    wal:
      path: C:\Users\nadeem\vsc-workspaces\tempo  # where to store the wal locally
      bloom_filter_false_positive: .05            # bloom filter false positive rate. lower values create larger filters but fewer false positives
      index_downsample: 10                        # number of traces per index record
    s3:
      bucket: tempo                 # how to store data in s3
      access_key: <Your Key>
      secret_key: <Your secret>
      insecure: true
      endpoint: s3-core.com
      region: us_east_2
    pool:
      max_workers: 100              # the worker pool mainly drives querying, but is also used for polling the blocklist
      queue_depth: 10000
```
Running Tempo with this config:

```
docker run --rm -p 6831:6831/udp -p 9411:9411 -p 3100:3100 --name tempo -v ${pwd}\tempo-s3.yaml:/etc/tempo-local.yaml --network docker-tempo grafana/tempo:latest --config.file=/etc/tempo-local.yaml
```

fails with:

```
failed parsing config: failed to parse configFile /etc/tempo-local.yaml: yaml: unmarshal errors:
  line 23: field traces_per_block not found in type ingester.Config
  line 47: field region not found in type s3.Config
```
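The unmarshal errors name fields the running Tempo build does not recognize, so one first step (a guess on my part, not a confirmed fix) is to drop `traces_per_block` from the `ingester` block and `region` from the `s3` block and let the remaining fields stand, e.g.:

```yaml
# Sketch: only the fields the parser rejected are removed; everything else
# is unchanged. Whether the S3 endpoint makes the region unnecessary is an
# assumption to verify against the Tempo version actually being run.
ingester:
  trace_idle_period: 10s
  max_block_duration: 5m

storage:
  trace:
    backend: s3
    s3:
      bucket: tempo
      access_key: <Your Key>
      secret_key: <Your secret>
      insecure: true
      endpoint: s3-core.com
```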
Trying a commit-tagged image instead:

```
docker run --rm -p 6831:6831/udp -p 9411:9411 -p 3100:3100 --name tempo -v ${pwd}\tempo-s3.yaml:/etc/tempo-local.yaml --network docker-tempo grafana/tempo:c2989f6d
Unable to find image 'grafana/tempo:c2989f6d' locally
c2989f6d: Pulling from grafana/tempo
31603596830f: Already exists
e1e22e13b28b: Pull complete
c958de5d7379: Pull complete
Digest: sha256:446811b6510c7165f81f4b1920e27621638704976048661d0b4f94c8cde951b7
Status: Downloaded newer image for grafana/tempo:c2989f6d
level=info ts=2021-01-29T14:48:31.852512757Z caller=main.go:89 msg="Starting Tempo" version="(version=c2989f6d, branch=master, revision=c2989f6d)"
level=error ts=2021-01-29T14:48:31.852585957Z caller=main.go:92 msg="error running Tempo" err="failed to init module services error initialising module: store: failed to create store unknown backend "
```
The same happens with the 0.5.0 release:

```
docker run --rm -p 6831:6831/udp -p 9411:9411 -p 3100:3100 --name tempo -v ${pwd}\tempo-s3.yaml:/etc/tempo-local.yaml --network docker-tempo grafana/tempo:0.5.0
Unable to find image 'grafana/tempo:0.5.0' locally
0.5.0: Pulling from grafana/tempo
31603596830f: Already exists
0e205338ddfb: Pull complete
d8ed4f3a7f3f: Pull complete
Digest: sha256:e518cf6fba518dd3ab63cbed7833047e80d73e6a6ee72ee4be1373379b366f3c
Status: Downloaded newer image for grafana/tempo:0.5.0
level=info ts=2021-01-29T14:57:42.563062539Z caller=main.go:89 msg="Starting Tempo" version="(version=0.5.0, branch=HEAD, revision=8744e919)"
level=info ts=2021-01-29T14:57:42.563343439Z caller=server.go:225 http=[::]:80 grpc=[::]:9095 msg="server listening on addresses"
level=info ts=2021-01-29T14:57:42.563459038Z caller=frontend.go:18 msg="creating tripperware in query frontend to shard queries"
level=error ts=2021-01-29T14:57:42.563558238Z caller=main.go:92 msg="error running Tempo" err="failed to init module services error initialising module: store: failed to create store unknown backend "
```
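The trailing space in `unknown backend ` suggests Tempo read an empty string where the backend name should be, i.e. `storage.trace.backend` was never seen. One possible cause (an assumption, not verified): the YAML nesting was lost when the file was edited, or the `${pwd}\...` volume mount on Windows did not deliver the file Tempo expected. The minimal shape the store needs is:

```yaml
# Sketch of the required nesting: backend must sit under storage.trace,
# not at the top level of the file.
storage:
  trace:
    backend: s3
```

From PowerShell, writing the bind mount with forward slashes and quotes (`-v "${PWD}/tempo-s3.yaml:/etc/tempo-local.yaml"`) may also be worth trying, since mixed `\` and `/` path separators can produce a mount path Docker does not resolve as intended.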
Expected behavior
Tempo should start with the S3 storage backend.
Environment:
- Infrastructure: Docker on a laptop, and OpenShift
- Deployment tool: manual and Jenkins
Additional Context