Failed to deploy, function code combined with layers exceeds the maximum allowed size of 262144000 bytes #1733
The collector layer package size being that big does not seem realistic to me. I just checked the size for the latest version of the collector layer after building it locally.
The output is in bytes, so the total would be 44974607 bytes, or roughly 43 MiB. Are there any other details about your setup we should know about? Are the otel layers the only ones you use, or do you have other additional layers?
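The figures in this thread are easy to mix up, so here is a quick sanity check of the two key numbers (a minimal sketch; the byte counts are the ones quoted in this thread, and 262144000 bytes is the limit from the deploy error):

```python
def bytes_to_mib(n: int) -> float:
    """Convert a raw byte count to mebibytes (MiB)."""
    return n / 2**20

# Unzipped collector layer size measured above: about 43 MiB, not KiB.
print(round(bytes_to_mib(44974607), 1))   # → 42.9

# The limit from the deploy error is exactly 250 MiB.
print(round(bytes_to_mib(262144000), 1))  # → 250.0
```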
Hi wpessers, thanks for replying. Very interesting, and that 44 MB is unzipped; I think AWS has a 250 MB limit on the zip size. As you say, something doesn't add up. More details / recap: the otel collector is the only layer I have.
Gives `"CodeSize": 13907741`. Likewise, when I get info on my deployed lambda WITHOUT the layer, I see `"CodeSize": 57307340` (zipped). I just tried manually adding the layer to this function via the AWS Lambda console; same error.

I guess this is very likely the node_modules pushing things over the limit. It's a shame that neither Serverless nor AWS helps you here with something like "I see your node modules are too big; the AWS Node base image comes with x, y and z, so you don't need them in your bundle." It's like almost-free Lambda is a gateway drug to spending more and more money on AWS. Anything more than a hello world and things get tricksie.

There is a certain irony to the fact that trying to add observability to my code falls over due to the hidden and obscure ways that AWS works.
So are you bundling your code using something like esbuild or webpack? How big is your own deployment package after bundling (before compression)?
Hi @LazyBrush, I think that even though your Lambda function's zipped size (without the OTEL collector layer) is 54.7 MB, it is very close to 250 MB once unzipped. The 250 MB Lambda artifact size limit is the maximum function code size, including layers, after unzipping. To verify that, can you measure the unzipped size of your Lambda function artifact?
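The unzipped size can be measured locally without deploying anything. A minimal sketch using Python's standard `zipfile` module (the artifact path in the usage comment is a placeholder; the Serverless framework typically writes its build artifact under `.serverless/`):

```python
import zipfile

def unzipped_size(path: str) -> int:
    """Sum of the uncompressed sizes of all entries in a zip archive."""
    with zipfile.ZipFile(path) as zf:
        return sum(info.file_size for info in zf.infolist())

# Usage (hypothetical artifact path): compare the result against the
# 262144000-byte combined code + layers limit from the deploy error.
#   size = unzipped_size(".serverless/my-service.zip")
#   print(size, "bytes unzipped;", "over" if size > 262144000 else "within", "limit")
```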
Describe the bug
Using nestjs with sls to deploy a lambda with otel. I'm using the collector layer to then shuffle telemetry to an S3 bucket for later offline inspection.
When I deploy without the opentelemetry-collector-amd64-0_13_0:1 image layer, the AWS console for the lambda says the bundle size is 54.7 MB. When I add the layer, sls fails to deploy with the error `maximum allowed size of 262144000 bytes`; the actual size is 264975312 bytes.
If we believe these numbers, then it looks like the collector layer is about 210 MB, or over 80% of the maximum Lambda size; some of the documentation says 100 MB+.
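Taking the error message's numbers at face value, the combined package is only slightly over the limit (a quick check using the figures quoted above):

```python
LIMIT = 262144000    # combined code + layers limit from the error (250 MiB)
ACTUAL = 264975312   # size reported in the failed deploy

over = ACTUAL - LIMIT
print(over, "bytes over the limit")        # → 2831312
print(round(over / 2**20, 1), "MiB over")  # → 2.7
```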
Steps to reproduce
nestjs app as lambda, using otel, deployed with serverless (sls).
What did you expect to see?
My app deployed, nice otel tracing and my heart beating with joy.
What did you see instead?
sls failed to deploy, me crying inside.
What version of collector/language SDK version did you use?
xxx:layer:opentelemetry-collector-amd64-0_13_0:1
What language layer did you use?
Perhaps I am not familiar with the language layer; my guess is that it gives you auto-instrumentation that wraps your actual code. Instead, I've built my nestjs app with the otel SDK, so I am creating spans/metrics/logs manually; my guess is I do not need a language layer?
Additional context
I can run the nestjs code locally with a docker compose collector and the suite of tools like jaeger, prometheus, etc.
I tried to use webpack to reduce the bundle size.
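One common way to shrink a Node.js Lambda bundle is to exclude the AWS SDK, since it already ships with the Node.js Lambda runtime. A hypothetical sketch using the serverless-esbuild plugin; the plugin name and option keys are assumptions, so check the plugin's docs against your serverless version:

```yaml
# serverless.yml (fragment) — hypothetical esbuild bundling config
plugins:
  - serverless-esbuild

custom:
  esbuild:
    bundle: true
    minify: true
    # aws-sdk ships with the Node.js runtime, so it need not be bundled
    exclude:
      - aws-sdk
```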
I also went down the path of trying to use a container for the lambda, but then layers are not supported, so I would not have the collector layer to shuffle the telemetry to S3.
My main aim is to have a minimal cost solution but still have observability. My thoughts were I could run a lambda without the traditional jaeger/prom/etc tools, just write to S3 logs and then offline or when there was an issue I could spin up those tools on my dev machine to then inspect data in S3.
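For the write-telemetry-to-S3 idea, the collector layer can usually be pointed at a custom configuration file. A hedged sketch, assuming the layer build includes the contrib `awss3` exporter; the bucket, region, and prefix are placeholders, and you should check the layer's docs for how it loads custom config (e.g. via an environment variable):

```yaml
# collector.yaml — hypothetical config shipping telemetry to S3
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  awss3:
    s3uploader:
      region: us-east-1          # placeholder region
      s3_bucket: my-otel-bucket  # placeholder bucket
      s3_prefix: telemetry

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [awss3]
```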
Happy to have suggestions on how best to proceed.
Thank you!