Manage computes
A primary read-write compute is created for your project's default branch.
To connect to a database that resides in a branch, you must connect via a compute associated with the branch. The following diagram shows the project's default branch (`main`) and a child branch, both of which have an associated compute.
Neon supports both read-write and read replica computes. A branch can have a single primary read-write compute but supports multiple read replica computes.
Plan limits define resources (vCPUs and RAM) available to a compute. The Neon Free Plan provides a shared vCPU and up to 1 GB of RAM per compute. Paid plans support larger compute sizes and autoscaling.
View a compute
A compute is associated with a branch. To view a compute, select Branches in the Neon Console, and select a branch. If the branch has a compute, it is shown on the Computes tab on the branch page.
Compute details shown on the branch page include:
- Type: The compute type, which can be Primary (read-write) or Read Replica (read-only).
- Status: The compute status, typically Active or Idle.
- Compute ID: The compute ID, which always starts with an `ep-` prefix; for example: `ep-quiet-butterfly-w2qres1h`.
- Size: The size of the compute. Users on paid plans can configure the amount of vCPU and RAM for a compute when creating or editing a compute. If autoscaling is enabled, the minimum and maximum vCPU values are shown.
- Last active: The date and time the compute was last active.
Create a compute
You can create a primary read-write compute only for a branch that does not already have one, but a branch can have multiple read replica computes.
To create a compute:
- In the Neon Console, select Branches.
- Select a branch.
- Click Add a compute, or click Add Read Replica if the branch already has a primary read-write compute.
- On the Add new compute dialog, specify your compute settings, including compute type, size, autoscaling, and scale to zero settings, and click Create. Selecting the Read replica compute type creates a read replica.
Edit a compute
You can edit a compute to change the compute size or scale to zero configuration.
To edit a compute:
- In the Neon Console, select Branches.
- Select a branch.
- From the Compute tab, select Edit for the compute you want to edit. The Edit window opens, letting you modify settings such as compute size, the autoscaling configuration (if applicable), and your scale to zero setting.
- Once you've made your changes, click Save. All changes take immediate effect.
For information about selecting an appropriate compute size or autoscaling configuration, see How to size your compute.
What happens to the compute when making changes
Some key points to understand about how your compute responds when you make changes to your compute settings:
- Changing the size of your fixed compute restarts the compute and temporarily disconnects all existing connections.
note
When your compute resizes automatically as part of the autoscaling feature, there are no restarts or disconnects; it just scales.
- Editing minimum or maximum autoscaling sizes also requires a restart; existing connections are temporarily disconnected.
- Changes to scale to zero settings do not require a compute restart; existing connections are unaffected.
- If you disable scale to zero entirely, you will need to restart your compute manually to get the latest compute-related release updates from Neon. See Restart a compute.
To avoid prolonged interruptions resulting from compute restarts, we recommend configuring your clients and applications to reconnect automatically in case of a dropped connection.
Compute size and autoscaling configuration
Users on paid plans can change compute size settings when editing a compute.
Compute size is the number of Compute Units (CUs) assigned to a Neon compute. The number of CUs determines the processing capacity of the compute. One CU has 1 vCPU and 4 GB of RAM, 2 CUs have 2 vCPUs and 8 GB of RAM, and so on. The amount of RAM in GB is always 4 times the vCPUs, as shown in the table below.
| Compute size (in CUs) | vCPU | RAM |
|---|---|---|
| 0.25 | 0.25 | 1 GB |
| 0.5 | 0.5 | 2 GB |
| 1 | 1 | 4 GB |
| 2 | 2 | 8 GB |
| 3 | 3 | 12 GB |
| 4 | 4 | 16 GB |
| 5 | 5 | 20 GB |
| 6 | 6 | 24 GB |
| 7 | 7 | 28 GB |
| 8 | 8 | 32 GB |
| 9 | 9 | 36 GB |
| 10 | 10 | 40 GB |
Neon supports fixed-size and autoscaling compute configurations.
- Fixed size: You can use the slider to select a fixed compute size. A fixed-size compute does not scale to meet workload demand.
- Autoscaling: You can also use the slider to specify a minimum and maximum compute size. Neon scales the compute size up and down within the selected compute size boundaries to meet workload demand. For information about how Neon implements the Autoscaling feature, see Autoscaling.
info
The `neon_utils` extension provides a `num_cpus()` function you can use to monitor how the Autoscaling feature allocates compute resources in response to workload. For more information, see The neon_utils extension.
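As a minimal sketch, assuming the `neon_utils` extension is installed in your database, you can check the current vCPU allocation like this:

```sql
-- Install the neon_utils extension
CREATE EXTENSION IF NOT EXISTS neon_utils;

-- Returns the number of vCPUs currently allocated to the compute;
-- with autoscaling enabled, the value changes as the compute scales
SELECT num_cpus();
```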
How to size your compute
The size of your compute determines the amount of frequently accessed data you can cache in memory and the maximum number of simultaneous connections you can support. As a result, if your compute size is too small, this can lead to suboptimal query performance and connection limit issues.
In Postgres, the `shared_buffers` setting defines the amount of data that can be held in memory. In Neon, the `shared_buffers` parameter is always set to 128 MB, but Neon uses a Local File Cache (LFC) to extend the amount of memory available for caching data. The LFC can use up to 80% of your compute's RAM.
The Postgres `max_connections` setting defines your compute's maximum simultaneous connection limit and is set according to your compute size. Larger computes support higher maximum connection limits.
The following table outlines the vCPU, RAM, LFC size (80% of RAM), and the `max_connections` limit for each compute size that Neon supports.
| Min. Compute Size (CU) | vCPU | RAM | LFC size | max_connections |
|---|---|---|---|---|
| 0.25 | 0.25 | 1 GB | 0.8 GB | 112 |
| 0.50 | 0.50 | 2 GB | 1.6 GB | 225 |
| 1 | 1 | 4 GB | 3.2 GB | 450 |
| 2 | 2 | 8 GB | 6.4 GB | 901 |
| 3 | 3 | 12 GB | 9.6 GB | 1351 |
| 4 | 4 | 16 GB | 12.8 GB | 1802 |
| 5 | 5 | 20 GB | 16 GB | 2253 |
| 6 | 6 | 24 GB | 19.2 GB | 2703 |
| 7 | 7 | 28 GB | 22.4 GB | 3154 |
| 8 | 8 | 32 GB | 25.6 GB | 3604 |
| 9 | 9 | 36 GB | 28.8 GB | 4000 |
| 10 | 10 | 40 GB | 32 GB | 4000 |
When selecting a compute size, ideally you want to keep as much of your dataset in memory as possible. This improves performance by reducing the number of reads from storage. If your dataset is not too large, select a compute size that will hold the entire dataset in memory. For larger datasets that cannot be fully held in memory, select a compute size that can hold your working set. Sizing your compute for a working set involves a few additional steps, outlined below in Sizing your compute based on the working set.
Regarding connection limits, you'll want a compute size that can support your anticipated maximum number of concurrent connections. If you are using Autoscaling, it is important to remember that your `max_connections` setting is based on the minimum compute size in your autoscaling configuration. The `max_connections` setting does not scale with your compute. To avoid the `max_connections` constraint, you can use a pooled connection with your application, which supports up to 10,000 concurrent user connections. See Connection pooling.
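As an illustration, the only difference between a direct and a pooled connection string is a `-pooler` suffix added to the compute endpoint ID in the hostname; the endpoint ID, region, user, password, and database below are placeholders:

```bash
# Direct connection: subject to the compute's max_connections limit
psql "postgresql://alex:<password>@ep-quiet-butterfly-w2qres1h.us-east-2.aws.neon.tech/dbname"

# Pooled connection (note the -pooler suffix): supports up to 10,000 concurrent connections
psql "postgresql://alex:<password>@ep-quiet-butterfly-w2qres1h-pooler.us-east-2.aws.neon.tech/dbname"
```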
Sizing your compute based on the working set
If it's not possible to hold your entire dataset in memory, the next best option is to ensure that your working set is in memory. A working set is your frequently accessed or recently used data and indexes. To determine whether your working set is fully in memory, you can query the cache hit ratio for your Neon compute. The cache hit ratio tells you how many queries are served from memory. Queries that are not served from memory must retrieve data from Neon storage (the Pageserver), which can affect query performance.
As mentioned above, Neon computes use a Local File Cache (LFC) to extend Postgres shared buffers. You can monitor the Local File Cache hit rate and your working set size from the charts on Neon's Monitoring page in the Neon Console.
Neon also provides a `neon` extension with a `neon_stat_file_cache` view that you can use to query the cache hit ratio for your compute's Local File Cache. For more information, see The neon extension.
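As a rough sketch, assuming the `neon` extension is installed, you can inspect the LFC statistics, including the cache hit ratio, like this:

```sql
-- Install the neon extension
CREATE EXTENSION IF NOT EXISTS neon;

-- Inspect Local File Cache statistics, including the cache hit ratio;
-- a consistently high ratio suggests your working set fits in memory
SELECT * FROM neon_stat_file_cache;
```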
Autoscaling considerations
Autoscaling is most effective when your data (either your full dataset or your working set) can be fully cached in memory on the minimum compute size in your autoscaling configuration.
Consider this scenario: If your data size is approximately 6 GB, starting with a compute size of 0.25 CU can lead to suboptimal performance because your data cannot be adequately cached. While your compute will scale up from 0.25 CU on demand, you may experience poor query performance until your compute scales up and fully caches your working set. You can avoid this issue if your minimum compute size can hold your working set in memory.
As mentioned above, your `max_connections` setting is based on the minimum compute size in your autoscaling configuration and does not scale along with your compute. To avoid this `max_connections` constraint, you can use a pooled connection for your application. See Connection pooling.
Scale to zero configuration
Neon's Scale to Zero feature automatically transitions a compute into an idle state after a period of inactivity, a behavior also referred to as autosuspension. By default, suspension occurs after 5 minutes of inactivity, but this delay can be adjusted. For instance, you can increase the delay to reduce the frequency of suspensions, or you can disable scale to zero completely to maintain an "always-active" compute. An "always-active" configuration eliminates the few seconds of latency required to reactivate a compute but is likely to increase your compute time usage.
The maximum scale to zero setting is 7 days. For more information, refer to Configuring scale to zero for Neon computes.
important
If you disable autosuspension entirely or your compute is never idle long enough to be automatically suspended, you will have to manually restart your compute to pick up the latest updates to Neon's compute images. Neon typically releases compute-related updates weekly. Not all releases contain critical updates, but a weekly compute restart is recommended to ensure that you do not miss anything important. For how to restart a compute, see Restart a compute.
Restart a compute
It is sometimes necessary to restart a compute. For example, if you upgrade to a paid plan, you may want to restart your compute to immediately apply your upgraded limits, or maybe you've disabled autosuspension and want to restart your compute to pick up the latest compute-related updates, which Neon typically releases weekly.
important
Please be aware that restarting a compute interrupts any connections currently using the compute. To avoid prolonged interruptions resulting from compute restarts, we recommend configuring your clients and applications to reconnect automatically in case of a dropped connection.
You can restart a compute using one of the following methods:
- Stop activity on your compute (stop running queries) and wait for your compute to suspend due to inactivity. By default, Neon suspends a compute after 5 minutes of inactivity. You can watch the status of your compute on the Branches page in the Neon Console. Select your branch and monitor your compute's Status field. Wait for it to report an `Idle` status. The compute will restart the next time it's accessed, and the status will change to `Active`.
- Issue a Restart endpoint call using the Neon API. You can do this directly from the Neon API Reference using the Try It! feature or via the command line with a cURL command similar to the one shown below. You'll need your project ID, compute endpoint ID, and an API key.
- Users on paid plans can temporarily set a compute's scale to zero setting to a low value to initiate a suspension (the default setting is 5 minutes). See Scale to zero configuration for instructions. After doing so, check the Operations page in the Neon Console and look for a `suspend_compute` action. Any activity on the compute, such as running a query, will restart it. Watch for a `start_compute` action on the Operations page.
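The restart call below is a sketch using the Neon API's restart endpoint method; replace the project ID, endpoint ID, and API key with your own values:

```bash
curl --request POST \
     --url https://console.neon.tech/api/v2/projects/<project_id>/endpoints/<endpoint_id>/restart \
     --header 'accept: application/json' \
     --header "authorization: Bearer $NEON_API_KEY" | jq
```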
Delete a compute
Deleting a compute is a permanent action.
To delete a compute:
- In the Neon Console, select Branches.
- Select a branch.
- From the Compute tab, click Edit for the compute you want to delete.
- At the bottom of the Edit compute settings drawer, click Delete compute.
Manage computes with the Neon API
Compute actions performed in the Neon Console can also be performed using the Neon API. The following examples demonstrate how to create, view, update, and delete computes using the Neon API. For other compute-related API methods, refer to the Neon API reference.
note
The API examples that follow may not show all of the user-configurable request body attributes that are available to you. To view all attributes for a particular method, refer to the method's request body schema in the Neon API reference.
The `jq` option specified in each example is an optional third-party tool that formats the JSON response, making it easier to read. For information about this utility, see jq.
Prerequisites
A Neon API request requires an API key. For information about obtaining an API key, see Create an API key. In the cURL examples below, `$NEON_API_KEY` is specified in place of an actual API key, which you must provide when making a Neon API request.
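For example, you might export your key as an environment variable before running the examples below (the value shown is a placeholder):

```bash
# Make the API key available to the cURL examples in this section
export NEON_API_KEY=<your_neon_api_key>
```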
Create a compute with the API
The following Neon API method creates a compute.
The API method appears as follows when specified in a cURL command. A compute must be associated with a branch. Neon supports read-write and read replica computes; a branch can have a single primary read-write compute but supports multiple read replica computes, so if you are creating a read-write compute, the branch you specify cannot already have one.
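A sketch of that command is shown below; the project ID and branch ID are placeholders, and the `type` attribute would be `read_only` to create a read replica instead:

```bash
curl --request POST \
     --url https://console.neon.tech/api/v2/projects/<project_id>/endpoints \
     --header 'accept: application/json' \
     --header "authorization: Bearer $NEON_API_KEY" \
     --header 'content-type: application/json' \
     --data '
{
  "endpoint": {
    "branch_id": "<branch_id>",
    "type": "read_write"
  }
}
' | jq
```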
Response body
List computes with the API
The following Neon API method lists computes for the specified project. A compute belongs to a Neon project. To view the API documentation for this method, refer to the Neon API reference.
The API method appears as follows when specified in a cURL command:
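A sketch of that command, with a placeholder project ID:

```bash
curl --request GET \
     --url https://console.neon.tech/api/v2/projects/<project_id>/endpoints \
     --header 'accept: application/json' \
     --header "authorization: Bearer $NEON_API_KEY" | jq
```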
Response body
Update a compute with the API
The following Neon API method updates the specified compute. To view the API documentation for this method, refer to the Neon API reference.
The API method appears as follows when specified in a cURL command. The example reassigns the compute to another branch by changing the `branch_id`. A compute must be associated with a branch, and a branch can have only one primary read-write compute, so the branch that you specify cannot already have one; multiple read replica computes are allowed.
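A sketch of that command is shown below; the project ID, endpoint ID, and new branch ID are placeholders:

```bash
curl --request PATCH \
     --url https://console.neon.tech/api/v2/projects/<project_id>/endpoints/<endpoint_id> \
     --header 'accept: application/json' \
     --header "authorization: Bearer $NEON_API_KEY" \
     --header 'content-type: application/json' \
     --data '
{
  "endpoint": {
    "branch_id": "<new_branch_id>"
  }
}
' | jq
```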
Response body
Delete a compute with the API
The following Neon API method deletes the specified compute. To view the API documentation for this method, refer to the Neon API reference.
The API method appears as follows when specified in a cURL command.
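A sketch of that command, with placeholder project and endpoint IDs:

```bash
curl --request DELETE \
     --url https://console.neon.tech/api/v2/projects/<project_id>/endpoints/<endpoint_id> \
     --header 'accept: application/json' \
     --header "authorization: Bearer $NEON_API_KEY" | jq
```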
Response body
Compute-related issues
This section outlines compute-related issues you may encounter and possible resolutions.
No space left on device
You may encounter an error ending in `No space left on device` when your compute's local disk storage is full.
Neon computes allocate approximately 20 GB of local disk space for temporary files used by Postgres. Data-intensive operations can sometimes consume all of this space, resulting in `No space left on device` errors.
To resolve this issue, you can try the following strategies:
- Identify and terminate resource-intensive processes: These could be long-running queries, operations, or possibly sync or replication activities. You can start your investigation by listing running queries by duration, as shown in the example after this list.
- Optimize queries to reduce temporary file usage.
- Adjust pipeline settings for third-party sync or replication: If you're syncing or replicating data with an external service, modify the pipeline settings to control disk space usage.
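For the first strategy, a sketch of a query that lists running queries by duration, using the standard `pg_stat_activity` view:

```sql
-- List currently running queries, longest-running first
SELECT pid,
       now() - query_start AS duration,
       usename,
       state,
       query
FROM pg_stat_activity
WHERE state <> 'idle'
  AND query_start IS NOT NULL
ORDER BY duration DESC;
```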
If the issue persists, refer to our Neon Support channels.
Compute is not suspending
In some cases, you may observe that your compute remains active when you expect it to suspend. Possible causes include:
- Connection requests: Frequent connection requests from clients, applications, or integrations can prevent a compute from suspending automatically. Each connection resets the scale to zero timer.
- Background processes: Some applications or background jobs may run periodic tasks that keep the connection active.
Steps you can take to identify the cause include:
- Check for active processes

  You can run a query to identify active sessions and their states (see the example query after this list). Look for processes initiated by your users, applications, or integrations that may be keeping your compute active.
- Review connection patterns

  - Ensure that no applications are sending frequent, unnecessary connection requests.
  - Consider batching connections if possible, or use connection pooling to limit persistent connections.
- Optimize any background jobs

  If background jobs are needed, reduce their frequency or adjust their timing to allow Neon's scale to zero feature to activate after the defined period of inactivity (the default is 5 minutes). For more information, refer to our Scale to zero guide.
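For the first step above, a sketch of a query against the standard `pg_stat_activity` view that shows client sessions and their states:

```sql
-- Show client sessions, their state, and the last statement they ran
SELECT pid,
       usename,
       application_name,
       client_addr,
       state,
       state_change,
       query
FROM pg_stat_activity
WHERE backend_type = 'client backend'
ORDER BY state_change DESC;
```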
Need help?
Join our Discord Server to ask questions or see what others are doing with Neon. Users on paid plans can open a support ticket from the console. For more details, see Getting Support.