Bug Report
Describe the bug
When trying to upload logs to Aliyun OSS (Aliyun equivalent of S3) using the S3 output plugin, I get the following error:
PutObject API responded with error='SecondLevelDomainForbidden', message='Please use virtual hosted style to access.'
It seems that fluent-bit is using path-style S3 API, while OSS only supports virtual-host-style.
I couldn't find any config option to make fluent-bit use virtual-host-style S3 requests.
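For clarity, the difference between the two addressing styles can be sketched like this (illustrative only, not fluent-bit's actual implementation):

```python
def path_style_url(host: str, bucket: str, key: str) -> str:
    # Path-style: the bucket appears as the first segment of the URL path.
    # This is what fluent-bit appears to be sending.
    return f"https://{host}/{bucket}/{key}"

def virtual_host_style_url(host: str, bucket: str, key: str) -> str:
    # Virtual-host-style: the bucket becomes a subdomain of the endpoint.
    # This is the only style Aliyun OSS accepts.
    return f"https://{bucket}.{host}/{key}"

host = "oss-cn-beijing-internal.aliyuncs.com"
print(path_style_url(host, "mybucket", "logs/app.log"))
# https://oss-cn-beijing-internal.aliyuncs.com/mybucket/logs/app.log
print(virtual_host_style_url(host, "mybucket", "logs/app.log"))
# https://mybucket.oss-cn-beijing-internal.aliyuncs.com/logs/app.log
```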
To Reproduce
- Example log message if applicable:
Fluent Bit v3.1.3
* Copyright (C) 2015-2024 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io
______ _ _ ______ _ _ _____ __
| ___| | | | | ___ (_) | |____ |/ |
| |_ | |_ _ ___ _ __ | |_ | |_/ /_| |_ __ __ / /`| |
| _| | | | | |/ _ \ '_ \| __| | ___ \ | __| \ \ / / \ \ | |
| | | | |_| | __/ | | | |_ | |_/ / | |_ \ V /.___/ /_| |_
\_| |_|\__,_|\___|_| |_|\__| \____/|_|\__| \_/ \____(_)___/
[2025/05/27 13:16:27] [ info] [fluent bit] version=3.1.3, commit=12a9de521c, pid=1
[2025/05/27 13:16:27] [ info] [storage] ver=1.5.2, type=memory, sync=normal, checksum=off, max_chunks_up=128
[2025/05/27 13:16:27] [ info] [cmetrics] version=0.9.1
[2025/05/27 13:16:27] [ info] [ctraces ] version=0.5.2
[2025/05/27 13:16:27] [ info] [input:cpu:cpu.0] initializing
[2025/05/27 13:16:27] [ info] [input:cpu:cpu.0] storage_strategy='memory' (memory only)
[2025/05/27 13:16:27] [ info] [fstore] created root path /tmp/fluent-bit/s3/REDACTED
[2025/05/27 13:16:27] [ info] [output:s3:s3.0] Using upload size 100000000 bytes
[2025/05/27 13:16:27] [ info] [sp] stream processor started
[2025/05/27 13:16:27] [ info] [output:s3:s3.0] worker #0 started
[2025/05/27 13:17:28] [error] [output:s3:s3.0] PutObject API responded with error='SecondLevelDomainForbidden', message='Please use virtual hosted style to access.'
[2025/05/27 13:17:28] [error] [output:s3:s3.0] Raw PutObject response: HTTP/1.1 403 Forbidden
Server: AliyunOSS
Date: Tue, 27 May 2025 13:17:28 GMT
Content-Type: application/xml
Content-Length: 376
Connection: keep-alive
x-amz-request-id: REDACTED
x-oss-ec: 0003-00001401
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>SecondLevelDomainForbidden</Code>
<Message>Please use virtual hosted style to access.</Message>
<RequestId>REDACTED</RequestId>
<HostId>oss-cn-beijing-internal.aliyuncs.com</HostId>
<EC>0003-00001401</EC>
<RecommendDoc>https://api.aliyun.com/troubleshoot?q=0003-00001401</RecommendDoc>
</Error>
[2025/05/27 13:17:28] [error] [output:s3:s3.0] PutObject request failed
[2025/05/27 13:17:28] [error] [output:s3:s3.0] Could not send chunk with tag my_cpu
Steps to reproduce the problem:
- Create a bucket in Aliyun OSS
- Set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY env vars to the credentials of an Aliyun user who has access to that bucket
- Run fluent-bit with the following config:

[SERVICE]
    Flush        1
    Daemon       Off
    Log_Level    info
    Parsers_File parsers.conf

[INPUT]
    Name cpu
    Tag  my_cpu

[OUTPUT]
    Name           s3
    Match          my_cpu
    region         cn-beijing
    endpoint       https://oss-cn-beijing-internal.aliyuncs.com
    bucket         <BUCKETNAME>
    s3_key_format  /example-dir/example-file-%Y%m%d-%H%M%S_$UUID.log
    upload_timeout 1m
    use_put_object true
Expected behavior
Fluent-bit's S3 output plugin uses virtual-host-style S3 API when used with Aliyun OSS, either by automatically detecting it, or by providing a config option to switch to that type of API.
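As a sketch of what such a config option might look like (note: virtual_host_style is a hypothetical parameter name, not an existing out_s3 option):

[OUTPUT]
    Name               s3
    Match              my_cpu
    region             cn-beijing
    endpoint           https://oss-cn-beijing-internal.aliyuncs.com
    bucket             <BUCKETNAME>
    virtual_host_style true   # hypothetical option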
Screenshots
N/A
Your Environment
- Version used: 3.1.3 (docker image from cr.fluentbit.io/fluent/fluent-bit:3.1.3-debug)
- Configuration: as shown in "Steps to reproduce"
- Environment name and version (e.g. Kubernetes? What version?): Kubernetes v1.30.7-aliyun.1
- Server type and version: N/A
- Operating System and version: N/A
- Filters and plugins: out_s3, plus some input plugin (happens with both in_cpu and kubernetes_events)
Additional context
We're trying to use fluent-bit in Kubernetes to upload various logs to S3-compatible storage in the various clouds we use.
In AWS it works fine, but in Aliyun we hit this error.
For now I found the following workaround:
- insert the bucket name into the endpoint URL
- take the first segment of the S3 key and put it in the bucket parameter
- remove the first segment of the S3 key from the s3_key_format parameter

like this:

endpoint      https://<BUCKETNAME>.oss-cn-beijing-internal.aliyuncs.com
bucket        example-dir
s3_key_format /example-file-%Y%m%d-%H%M%S_$UUID.log
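The workaround works because fluent-bit's path-style URL, built from the rewritten config, happens to coincide with the virtual-host-style URL OSS expects. A quick sanity check (illustrative only, assuming fluent-bit concatenates endpoint, bucket, and key in that order):

```python
def path_style_request_url(endpoint: str, bucket: str, key: str) -> str:
    # Assumed path-style construction: endpoint + "/" + bucket + key.
    # With the workaround, "endpoint" already carries the real bucket as
    # a subdomain, and "bucket" is really the first key segment.
    return f"{endpoint}/{bucket}{key}"

url = path_style_request_url(
    "https://mybucket.oss-cn-beijing-internal.aliyuncs.com",
    "example-dir",
    "/example-file-20250527.log",
)
# Same URL a native virtual-host-style request for bucket "mybucket"
# and key "example-dir/example-file-20250527.log" would use.
assert url == ("https://mybucket.oss-cn-beijing-internal.aliyuncs.com"
               "/example-dir/example-file-20250527.log")
```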
But this is an ugly hack and I'd rather not have to rely on it.