
Limits

The OpenFn cloud-hosted instance enforces a number of limits that help ensure smooth operation. The table below shows the limits for each plan. For a more detailed list of limits, see the OpenFn pricing page. For self-hosted instances, these limits are configurable; see the deployment guide for more details.

| Feature | Description | DPG | Free | Core | Growth | Scale | Unlimited |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Runs | Maximum number of runs allowed per month | Unlimited | 100 | 2,000 | 5,000 | 10,000 | Unlimited |
| Workflow Execution Duration | Maximum time a workflow can run before being killed | Configurable | 60 secs | 5 mins | 20 mins | 30 mins | 30 mins |
| Memory Usage | Maximum memory allowed per workflow attempt | Configurable | 128MB | 256MB | 512MB | 1GB | 1GB |
| Dataclip Size | Maximum size for dataclips (persisted run states) | Configurable | 512KB | 2MB | 10MB | 10MB | 10MB |
| AI Assistant | Maximum AI tokens available | Configurable | 500K | 1.5M | 5M | 10M | 10M |
| Data Collections (Storage) | Maximum storage for data collections | Configurable | 1MB | 5MB | 10MB | 50MB | 50MB |
| Data Collections (Number) | Maximum number of data collections per project | Configurable | 2 | 5 | 10 | Unlimited | Unlimited |
| Concurrency Control | Allows users to control concurrency limits for the project | Configurable | Yes | Yes | Yes | Yes | Yes |
Increasing limits for cloud-hosted and managed instances

For standard plans, you can increase your limits by upgrading to a higher plan; follow the upgrade plan instructions.

For custom limits or upgrades in dedicated deployments, contact enterprise@openfn.org.

Workflow Execution Duration (1 hour)

Each workflow attempt needs to complete in less than 1 hour. You can view the duration of each attempt by clicking on the attempt ID. If an attempt exceeds this limit, it will be killed by the worker and you'll see a Killed:Timeout badge as your attempt state.

Instance superusers can control this limit via the MAX_RUN_DURATION environment variable.

Memory Usage (1GB)

Each workflow attempt may not use more than 1GB of memory. You can view the maximum memory usage of each attempt by clicking on the attempt ID. If an attempt exceeds this limit, it will be killed by the worker and you'll see a Killed:OOM badge as your attempt state.

Instance superusers can control this limit via the MAX_RUN_MEMORY environment variable.
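If a job is approaching the memory cap, it usually helps to avoid building large intermediate structures on state. The sketch below is illustrative only: it assumes a job step using the common adaptor's fn() operation and a hypothetical state.data.items array, and keeps only a running aggregate rather than every raw record.

```js
// Sketch: aggregate large inputs instead of holding every raw record on state.
// fn() is provided by the common adaptor; the shape of state.data.items here
// is a hypothetical example.
fn(state => {
  let total = 0;

  // Walk the items once and keep only a running total, rather than building
  // large intermediate arrays that count against the memory limit.
  for (const item of state.data.items) {
    total += item.amount ?? 0;
  }

  return { ...state, data: { total } };
});
```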

Dataclip Size (10MB)

The 10MB dataclip limit applies in two places:

  1. Each webhook request to a trigger URL cannot exceed 10MB.
  2. If you are persisting the final state of each run as a dataclip, each dataclip may not exceed 10MB.

If you send a payload to a webhook trigger URL which breaches this limit, the server will respond with a 413 error and a :request_entity_too_large message.
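A client posting to a trigger URL can check the payload size locally before sending and handle the 413 response explicitly. The sketch below is illustrative only: the trigger URL is a placeholder, and it uses the standard Fetch API available in modern Node.js and browsers.

```js
// Minimal sketch: post a payload to a webhook trigger URL and handle the
// 10MB dataclip limit. Replace the placeholder URL with your own trigger URL.
const TRIGGER_URL = 'https://<your-openfn-instance>/i/<your-trigger-id>';
const MAX_DATACLIP_BYTES = 10 * 1024 * 1024; // 10MB cloud-hosted limit

async function postToTrigger(payload) {
  const body = JSON.stringify(payload);

  // Check the serialized size locally before sending.
  if (new TextEncoder().encode(body).length > MAX_DATACLIP_BYTES) {
    throw new Error('Payload exceeds the 10MB dataclip limit; send a smaller payload.');
  }

  const response = await fetch(TRIGGER_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body,
  });

  if (response.status === 413) {
    // Server-side rejection: request entity too large
    throw new Error('Webhook rejected the payload: request entity too large.');
  }

  return response;
}
```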

If the dataclips produced as the final state of runs and attempts are too large, they will not be persisted. The worker will still process downstream steps, but those steps will not be retryable because Lightning won't have saved a copy of the dataclips. You will see ERROR: DataClip too large for storage in your attempt logs.
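One way to stay under the limit is to drop bulky raw data from state in the last step of a job so the persisted dataclip contains only what downstream steps need. This is a rough sketch, assuming a job using the common adaptor's fn() operation; the state keys shown are hypothetical.

```js
// Sketch of a final job step that trims bulky raw data off state so the
// persisted dataclip stays under the 10MB limit. fn() is provided by the
// common adaptor; rawRecords is a hypothetical key set earlier in the job.
fn(state => {
  const { rawRecords, ...rest } = state;

  // Keep only a compact summary for downstream steps instead of the full
  // raw payload fetched earlier in the workflow.
  return {
    ...rest,
    data: {
      recordCount: rawRecords?.length ?? 0,
      ids: (rawRecords ?? []).map(r => r.id),
    },
  };
});
```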

Instance superusers can control this limit via the MAX_DATACLIP_SIZE environment variable.