IMPORTANT NOTE: this guide is mainly intended for local deployment and testing in development environments. If you wish to deploy to production, you will likely need extra configuration and optimizations for n8n.
You can build and run a Docker image containing the LlamaCloud custom nodes using either the Dockerfile or the compose.yaml file available in the GitHub repository:
With Docker
```shell
curl -L https://raw.githubusercontent.com/run-llama/n8n-llamacloud/master/Dockerfile > Dockerfile
docker build . -t n8n-llamacloud
# add any other --env variables before the image name
docker run \
  -p 5678:5678 \
  --env GENERIC_TIMEZONE="Europe/Berlin" \
  n8n-llamacloud
```
With Compose
```shell
curl -L https://raw.githubusercontent.com/run-llama/n8n-llamacloud/master/Dockerfile > Dockerfile
curl -L https://raw.githubusercontent.com/run-llama/n8n-llamacloud/master/compose.yaml > compose.yaml
docker compose up
```
In both cases, the n8n instance should be up and running at http://localhost:5678.
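To confirm the container is ready from the command line rather than the browser, you can poll it with `curl`. This is a minimal sketch, assuming n8n's health endpoint is `GET /healthz` on the mapped port; the `wait_for_n8n` helper name, the default URL, and the retry count are illustrative choices, not part of the repository:

```shell
#!/bin/sh
# Poll the local n8n instance until it responds, or give up.
# Assumption: n8n answers GET /healthz on the port mapped above (5678).
wait_for_n8n() {
  url="${1:-http://localhost:5678/healthz}"   # endpoint to poll
  retries="${2:-30}"                          # number of attempts
  i=0
  while [ "$i" -lt "$retries" ]; do
    # -s silences progress output, -f makes curl fail on HTTP errors
    if curl -sf "$url" > /dev/null 2>&1; then
      echo "n8n is up at $url"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "n8n did not become ready at $url" >&2
  return 1
}

wait_for_n8n
```

Run it after `docker run` or `docker compose up` starts; it exits non-zero if the instance never becomes reachable, which makes it easy to use in scripts.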