
Segfault in node:10-alpine(3.10) when running in AWS and unprivileged only #1158

@tomelliff

Description


Slightly confused by what's happening here, but: we have a Docker image built from node:10-alpine, and when it was rebuilt after that tag moved from Alpine 3.9 to Alpine 3.10, the container entered a crash loop, segfaulting during application startup.

Rolling back to the previous image, or rolling forward with the base pinned to node:10-alpine3.9, makes the problem go away. More weirdly, I can't reproduce this on non-AWS instances but can reliably reproduce it on multiple AWS instances. I also noticed that the container runs fine when started with --privileged.
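The --privileged behaviour might be a clue: as far as I understand, --privileged also disables Docker's default seccomp profile, so if the newer musl in Alpine 3.10 issues a syscall that the Docker/kernel combination on these AWS instances filters, that could explain a crash that only happens there and only unprivileged. A way to test that theory without going fully privileged (a sketch; the image tag and command here are just illustrative):

```shell
# Disable only the default seccomp profile, leaving all other
# --privileged effects (capabilities, device access) off.
# If this runs cleanly where the plain unprivileged container
# segfaults, the seccomp filter is the likely culprit.
docker run --rm --security-opt seccomp=unconfined \
  node:10-alpine node -e 'console.log("ok")'
```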

Looking at a core dump from the non-debug build, it looks like an issue in musl, but without debug symbols I don't yet know what's triggering it:

#0  0x00007fe1375ee07e in ?? () from /lib/ld-musl-x86_64.so.1
#1  0x00007fe1375eb4b6 in ?? () from /lib/ld-musl-x86_64.so.1
#2  0x00007fe134c22b64 in ?? ()
#3  0x0000000000000000 in ?? ()
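
To try to get readable frames out of that dump, my next step is to install musl's debug symbols in the same image and re-open the core (a sketch, assuming the gdb and musl-dbg packages are available in the Alpine 3.10 repositories; the core file path is just a placeholder):

```shell
# Inside the same node:10-alpine container that produced the dump:
# musl-dbg ships the debug info for /lib/ld-musl-x86_64.so.1
apk add --no-cache gdb musl-dbg

# Print the backtrace non-interactively; the ld-musl frames should
# now resolve to function names instead of "??"
gdb --batch -ex bt /usr/local/bin/node /tmp/core
```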

I'm also very confused why it wouldn't be segfaulting like this when it's run outside of AWS, or when the container is run as privileged.

Any ideas on how I can debug this further?
