Merged
1 change: 1 addition & 0 deletions docs/environment/_meta.json
@@ -10,5 +10,6 @@
"display": "hidden"
},
"aws-credentials": "AWS credentials",
"cold-starts": "Cold starts",
"performances": "Performance"
}
71 changes: 71 additions & 0 deletions docs/environment/cold-starts.mdx
@@ -0,0 +1,71 @@
import { Callout } from 'nextra/components';
import { NextSeo } from 'next-seo';

<NextSeo description="Understanding and mitigating cold starts in serverless PHP applications on AWS Lambda with Bref." />

# Cold starts

If your application cannot tolerate any response time above 300ms (e.g. real-time trading, multiplayer gaming), Lambda is not the right fit.

For everything else, cold starts are often the most overestimated concern when moving to serverless. On applications with regular traffic, cold starts represent about **0.1% of requests**. For the vast majority of applications, their impact is negligible.

## What is a cold start?

AWS Lambda runs code on-demand. When a new Lambda instance boots to handle a request, the initialization time is called a *cold start*.

Once initialized, the instance stays warm and handles subsequent requests with no cold start. Lambda keeps instances alive for several minutes after the last request. As long as your application receives regular traffic, most requests are handled by warm instances.

You can learn more about how Bref and Lambda work in [How Bref works](/docs/how-it-works), and about how Lambda scales in [Serverless Visually Explained](https://serverless-visually-explained.com/).

## Cold start duration

Bref's PHP runtimes add a cold start of about **250ms** on average. The rest depends on the size of your application.

To put it differently: for an application with regular traffic, out of 1,000 requests, 999 are as fast as on a traditional server, and 1 has an extra ~250ms of latency.

This is on par with [cold starts in other languages](https://mikhail.io/serverless/coldstarts/aws/) like JavaScript, Python or Go. AWS is [regularly reducing the duration of cold starts](https://levelup.gitconnected.com/aws-lambda-cold-start-language-comparisons-2019-edition-%EF%B8%8F-1946d32a0244), and Bref's runtimes are optimized as much as possible.

## Warming for low-traffic applications

If your application has very low traffic (e.g. a new project or an internal tool), your Lambda functions might scale down to 0 instances. Cold starts will then happen more often.

You can pre-warm your HTTP function by adding a scheduled event in `serverless.yml`:

```yml filename="serverless.yml"
functions:
    web:
        handler: public/index.php
        runtime: php-83-fpm
        timeout: 15
        events:
            - httpApi: '*'
            - schedule:
                  rate: rate(5 minutes)
                  input:
                      warmer: true
```

Bref recognizes the `warmer` event and responds with a `Status: 100` in a few milliseconds without executing your application code. This keeps the Lambda instance warm.
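Conceptually, the warmer short-circuit looks like this. The snippet below is a minimal Python sketch of the logic, not Bref's actual PHP implementation, and the function names are illustrative:

```python
# Illustrative sketch of the warmer short-circuit logic.
# Bref implements this internally; you do not write this yourself.
def handle(event: dict) -> str:
    if event.get("warmer") is True:
        # Warmer ping: respond immediately without running the application,
        # which keeps the Lambda instance warm.
        return "Status: 100"
    return run_application(event)

def run_application(event: dict) -> str:
    # Placeholder for real request handling
    return "200 OK"
```

Because the application code never runs for warmer pings, they add no load and cost only a few milliseconds of Lambda time.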

You can also use external services like [Pingdom](https://www.pingdom.com/) to ping your application regularly.

## Provisioned concurrency

AWS offers [provisioned concurrency](https://docs.aws.amazon.com/lambda/latest/dg/provisioned-concurrency.html) to keep a set number of Lambda instances initialized at all times. This completely eliminates cold starts for those instances.

This is useful for applications that need consistently low latency but is more expensive since you pay for the instances even when they are idle. For most applications, the warming approach above is simpler and sufficient.
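As a sketch, provisioned concurrency can be enabled per function in `serverless.yml` via the Serverless Framework's `provisionedConcurrency` option (the instance count here is illustrative):

```yml filename="serverless.yml"
functions:
    web:
        handler: public/index.php
        runtime: php-83-fpm
        # Keep 3 instances initialized at all times (billed even when idle)
        provisionedConcurrency: 3
        events:
            - httpApi: '*'
```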

## Reducing cold start duration

The codebase size can increase the cold start duration. When deploying, exclude unnecessary files in `serverless.yml`:

```yml filename="serverless.yml"
package:
    patterns:
        - '!assets/**'
        - '!node_modules/**'
        - '!tests/**'
        - ...
```

Read more about this [in the serverless.yml documentation](serverless-yml.mdx#exclusions).
25 changes: 2 additions & 23 deletions docs/environment/performances.mdx
@@ -97,27 +97,6 @@ In that case, be careful with clearing in-memory data between every event.

## Cold starts

Code on AWS Lambda runs on-demand. When a new Lambda instance boots to handle a request, the initialization time is what we call a *cold start*. To learn more, [you can read this article](https://hackernoon.com/im-afraid-you-re-thinking-about-aws-lambda-cold-starts-all-wrong-7d907f278a4f).
On applications with regular traffic, cold starts only represent about **0.1% of requests**. Bref's PHP runtimes add a cold start of about **250ms** on average, which is on-par with other languages.

Bref's PHP runtimes have a base cold start of **250ms** on average. The rest is impacted by the size of your application.

This is on par with [cold starts in other languages](https://mikhail.io/serverless/coldstarts/aws/), like JavaScript, Python or Go. AWS is [regularly reducing the duration of cold starts](https://levelup.gitconnected.com/aws-lambda-cold-start-language-comparisons-2019-edition-%EF%B8%8F-1946d32a0244), and we are also optimizing Bref's runtimes as much as possible.

On a website with low to medium traffic, you can expect cold starts to happen for about 0.3% of the requests.

### Optimizing cold starts

On small websites, cold starts can be avoided by pinging the application regularly. This keeps the lambda instances warm. [Pingdom](https://www.pingdom.com/) or similar services can be used, but you can also set up [an automatic ping via `serverless.yml`](../use-cases/http/advanced-use-cases#cold-starts).

While the memory size has no impact, the codebase size can increase the cold start duration. When deploying, remember to exclude assets, images, tests and any extra file in `serverless.yml`:

```yml filename="serverless.yml"
package:
    patterns:
        - '!assets/**'
        - '!node_modules/**'
        - '!tests/**'
        - ...
```

Read more about this [in the serverless.yml documentation](serverless-yml.mdx#exclusions).
Read more in the [Cold starts documentation](cold-starts.mdx).
17 changes: 2 additions & 15 deletions docs/use-cases/http/advanced-use-cases.mdx
@@ -292,19 +292,6 @@ $requestContext = $request->getAttribute('lambda-event')->getRequestContext();

## Cold starts

AWS Lambda automatically destroys Lambda containers that have been unused for 10 minutes. Warming up a new container can take 250ms or more, especially if your application is large. This delay is called [cold start](https://mikhail.io/serverless/coldstarts/aws/).
On applications with regular traffic, cold starts only represent about **0.1% of requests**. For low-traffic applications, you can pre-warm your HTTP function to avoid them.

To mitigate cold starts for HTTP applications, you can periodically send an event to your Lambda including a `{warmer: true}` payload. This will trigger the Lambda function. Bref recognizes this event and immediately responds with a `Status: 100` without executing your code.

You can set up such events using a schedule ([read this article for more details](https://www.jeremydaly.com/lambda-warmer-optimize-aws-lambda-function-cold-starts/)):

```yml filename="serverless.yml"
events:
    - httpApi: '*'
    - schedule:
          rate: rate(5 minutes)
          input:
              warmer: true
```

You can learn more about how AWS Lambda scales and runs in the [Serverless Visually Explained](https://serverless-visually-explained.com/) course.
Read the full [Cold starts documentation](/docs/environment/cold-starts) for details and the warming configuration.