Conversational AI systems are often composed of multiple applications. While most of this series has focused on using Docker with a single, standalone API, the last post introduced LibreChat, which relies on multiple services such as Ollama. Common parts of a conversational AI application include a frontend for interacting with the chat, a backend to facilitate conversations, and a database to store content and analytics. Other components can make your application even more robust and capable, but managing the relationships between these services and getting them to work together can be hard. Docker Compose simplifies the management of a multi-service application by allowing you to define and run all of its services from a single configuration file.
Setting up Docker Compose
In the previous articles of this series (Part 1, Part 2, and Part 3), we’ve been developing an API. We will continue to work with that API here, but we will improve it. We will incorporate simple logging of messages. This means we will need to add a database connection to our application.
Before diving into that new feature, let’s reorganize the project. As the project grows, each new service should live in its own folder, which helps distinguish between the pieces of the broader application. To start, create a sub-folder called “api” for the API service and move the existing application files inside; the .env file can stay at the project root so that Docker Compose can read it later.
├── docker_tutorial
│   ├── api
│   │   ├── app.py
│   │   ├── Dockerfile
│   │   ├── requirements.txt
│   ├── .env
Now, in the root folder of our project (docker_tutorial), add a Docker Compose file called compose.yaml.
├── docker_tutorial
│   ├── api
│   │   ├── app.py
│   │   ├── Dockerfile
│   │   ├── requirements.txt
│   ├── compose.yaml
│   ├── .env
The compose.yaml file is where you define the services that Docker Compose should manage. Build out a new service by specifying 1) the name of the service, 2) where its code and Dockerfile live, 3) the ports it should expose, and 4) the environment variables it needs to run.
services:
  api:
    build:
      context: ./api
      dockerfile: Dockerfile
    container_name: docker_tutorial_api
    ports:
      - 7860:7860
    env_file:
      - .env
This Docker Compose service is named “api”. When it runs, it will use the container_name “docker_tutorial_api”. Being able to define these names helps you keep track of your services while they are running.
The build section tells Docker Compose how to work with your project structure. The context sets the build directory for the API service to the api folder; you could use another path, such as the project root. The dockerfile setting tells Compose which Dockerfile to use, which means you can keep different Dockerfiles for different purposes, such as creating a specific set of test data or passing in different environment variables.
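For example, a hypothetical variation that builds from the project root and points at a dedicated Dockerfile might look like the following (the Dockerfile.test name is purely illustrative, not a file from this series):
services:
  api:
    build:
      # Hypothetical: use the project root as the build context instead of ./api
      context: .
      # Hypothetical: a dedicated Dockerfile, e.g. one that loads a specific set of test data
      dockerfile: api/Dockerfile.test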
With this set up, you can start the API service. I like to add the --build flag to ensure the images are rebuilt from scratch.
>>> docker compose up
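With the flag included, the command looks like this:
>>> docker compose up --build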
To stop the service, you can type the following in a different terminal window.
>>> docker compose down
Adding a Logging Service
To support logging, the API needs to be able to talk to a database. Postgres already provides a fully prepared Docker image, so you do not need to build that service yourself. You just need to reference the existing Docker image.
As with the API, to define the postgres service you need to specify the ports and give the service a name. However, instead of pointing to a build directory, specify the image Docker Compose should use. Also, instead of using the .env file, you can assign environment variables directly in the Compose file. You should not put sensitive values here, because they will be pushed up to your repository. In this case, the database user and password are clearly meant for testing or local development, so they are less risky to include. Defining these values like this can help teammates get up and running locally faster.
services:
  postgres:
    image: postgres
    container_name: docker_tutorial_postgres
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: postgres

  api:
    # same as before...
The API service needs to know the connection string it should use to connect to that database. In addition to reading the .env file, you can add specific environment variables. Since the connection string will not change within our Docker Compose setup, it is fine to hard-code this value. Note that the host in the connection string is the postgres container’s name rather than localhost, because the services reach each other over Docker’s internal network.
services:
  # postgres service...

  api:
    build:
      context: ./api
      dockerfile: Dockerfile
    container_name: docker_tutorial_api
    ports:
      - 7860:7860
    environment:
      CONNECTION_STRING: postgresql://postgres:password@docker_tutorial_postgres:5432/postgres
    env_file:
      - .env
Now, both services are set up. The logic the API needs to actually connect to the database is not ready yet, but you can still test that both services start up correctly. As the services spin up, can you spot a potential problem?
>>> docker compose up
Both services spun up at the same time. That is a problem because the API service depends on the database being available: the logging logic we are about to write and test will fail if the database is not ready when the API starts.
docker_tutorial_postgres | creating configuration files ... ok
docker_tutorial_postgres | running bootstrap script ... ok
docker_tutorial_postgres | performing post-bootstrap initialization ... ok
docker_tutorial_api | INFO: Started server process [1]
docker_tutorial_api | INFO: Waiting for application startup.
docker_tutorial_api | INFO: Application startup complete.
docker_tutorial_api | INFO: Uvicorn running on http://0.0.0.0:7860 (Press CTRL+C to quit)
docker_tutorial_postgres | syncing data to disk ... ok
docker_tutorial_postgres |
docker_tutorial_postgres |
docker_tutorial_postgres | Success. You can now start the database server
You need to tell Docker Compose that postgres must be ready before it spins up the API. To do that, add a healthcheck to the postgres service and have the API declare depends_on with condition: service_healthy. Docker Compose will run the test command against the database at regular intervals until it passes or the retries are exhausted. Once the health check passes, the API service is allowed to start.
services:
  postgres:
    image: postgres
    container_name: docker_tutorial_postgres
    ports:
      - 5432:5432
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: postgres
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres", "-d", "postgres"]
      interval: 5s
      retries: 5
      start_period: 10s
      timeout: 5s

  api:
    build:
      context: ./api
      dockerfile: Dockerfile
    container_name: docker_tutorial_api
    ports:
      - 7860:7860
    environment:
      CONNECTION_STRING: postgresql://postgres:password@docker_tutorial_postgres:5432/postgres
    env_file:
      - .env
    depends_on:
      postgres:
        condition: service_healthy
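While the stack is starting, you can watch the health status from another terminal. docker compose ps lists each container along with its state, and the postgres entry should report a healthy status once the check passes.
>>> docker compose ps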
This health check is a good approach and the one we’ll use going forward. It keeps the check in the Docker Compose file and is simple to understand. However, you may have more complex requirements that must be satisfied before a service runs. In that case, you can write your own script(s) to run before the Docker command that starts your application.
The wait-for-it script is a helpful tool for waiting on a service. It polls a host and port until the service there becomes available. To use it, copy wait-for-it.sh into your repository, then add the following steps to the Dockerfile for the API service. Notice that “docker_tutorial_postgres” matches the container_name of the postgres service in the Docker Compose file.
RUN apk update && apk add bash # wait-for-it.sh requires bash; apk assumes an Alpine-based image (use apt-get on Debian-based images)
COPY wait-for-it.sh wait-for-it.sh
RUN chmod +x wait-for-it.sh
CMD ./wait-for-it.sh docker_tutorial_postgres:5432 -- uvicorn app:app --host 0.0.0.0 --port 7860
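If you go the custom-script route instead, a minimal sketch could look like the following. The wait_for_db.py filename, retry count, and sleep interval are illustrative assumptions rather than code from this series; the script simply retries a psycopg2 connection until Postgres accepts it.
# wait_for_db.py: hypothetical readiness check, not part of the original project
import os
import sys
import time

import psycopg2

CONNECTION_STRING = os.environ.get("CONNECTION_STRING")

for attempt in range(30):
    try:
        # Open and immediately close a connection to confirm Postgres accepts connections
        psycopg2.connect(CONNECTION_STRING).close()
        print("Database is ready")
        sys.exit(0)
    except psycopg2.OperationalError:
        print(f"Database not ready (attempt {attempt + 1}/30), retrying...")
        time.sleep(2)

sys.exit(1)  # give up after about a minute so the container fails loudly
You would then swap it into the Dockerfile’s CMD, for example: CMD python wait_for_db.py && uvicorn app:app --host 0.0.0.0 --port 7860.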
Adding the Logging Logic
Docker Compose is working and can run the API and database in the correct order. Now, it’s time to add the logging feature. The API should log each message that comes into the system. To facilitate the database interactions, you need to add two more packages to requirements.txt: psycopg2-binary and sqlalchemy.
fastapi[standard]
psycopg2-binary # New
python-dotenv
requests
sqlalchemy # New
To keep the focus on Docker Compose, I’ve added the database logic to the app.py file we’ve been working with throughout this series. It creates the “messages” table if it isn’t present and saves messages to that table. The current implementation only saves the response to a user’s message. How might you adjust it to save both the user message and the response? A sketch of one possible answer follows the code.
import logging
import os

import requests
from dotenv import load_dotenv
from fastapi import FastAPI, Depends, HTTPException, status
from fastapi.security import OAuth2PasswordBearer
from pydantic import BaseModel
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.exc import SQLAlchemyError

load_dotenv()

OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
CONNECTION_STRING = os.environ.get("CONNECTION_STRING")

app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
logging.basicConfig(level=logging.INFO)


class QueryRequest(BaseModel):
    user_query: str


Base = declarative_base()
engine = create_engine(CONNECTION_STRING)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)


class Message(Base):
    __tablename__ = "messages"

    id = Column(Integer, primary_key=True, index=True)
    text = Column(String, nullable=False)


def initialize_database():
    Base.metadata.create_all(bind=engine)


@app.on_event("startup")
async def startup():
    initialize_database()


@app.get("/")
async def root():
    return {"message": "Hello World!"}


@app.post("/chat")
async def chat(request: QueryRequest, token: str = Depends(oauth2_scheme)):
    VALID_TOKEN = os.environ.get("MY_API_TOKEN")
    if token != VALID_TOKEN:
        raise HTTPException(status_code=401)

    openai_endpoint = "https://api.openai.com/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {OPENAI_API_KEY}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": request.user_query},
        ],
        "max_tokens": 100,
    }
    response = requests.post(openai_endpoint, headers=headers, json=payload).json()
    completion_text = response["choices"][0]["message"]["content"]

    session = SessionLocal()
    try:
        session.add(Message(text=completion_text))
        session.commit()
        logging.info("Message saved to database")
    except SQLAlchemyError as error:
        logging.error(f"Database error: {error}")
        session.rollback()
    finally:
        session.close()

    return {"response": completion_text}
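One possible answer to that question, as a minimal sketch (the role column and the two-row insert are my additions, assuming the same imports and setup as app.py above): give the Message model a role column and store both sides of the exchange.
# Hypothetical variation: add a role column so both the user message and the response are stored
class Message(Base):
    __tablename__ = "messages"

    id = Column(Integer, primary_key=True, index=True)
    role = Column(String, nullable=False)  # "user" or "assistant"
    text = Column(String, nullable=False)

# Then, inside the /chat endpoint, save both rows in one transaction
session = SessionLocal()
try:
    session.add(Message(role="user", text=request.user_query))
    session.add(Message(role="assistant", text=completion_text))
    session.commit()
    logging.info("Messages saved to database")
except SQLAlchemyError as error:
    logging.error(f"Database error: {error}")
    session.rollback()
finally:
    session.close()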
With this all set up, start the application.
>>> docker compose up
After the individual services have finished spinning up, send a test request to the API. You should see log messages confirming that the message was saved to the database and that the endpoint responded successfully.
>>> import requests
>>> headers = {"Authorization": f"Bearer {YOUR_API_TOKEN}", "Content-Type": "application/json"}
>>> resp = requests.post(url='http://localhost:7860/chat', json={'user_query':'hi'}, headers=headers)
docker_tutorial_api | INFO:root:Message saved to database
docker_tutorial_api | INFO: 172.18.0.1:49910 - "POST /chat HTTP/1.1" 200 OK
Docker Compose is a powerful tool for getting an environment composed of multiple services up and running quickly. It provides all the benefits Docker offers, such as separation from your local environment. Moreover, setting up Docker Compose can help teammates run your application with less hassle than if they set up the application from scratch. Beyond ease, though, Docker Compose can improve the quality of your application. Having the whole system available allows for more robust end-to-end testing. Centralizing configuration into predictable files helps with things like versioning and experimenting with configurations. Docker Compose is a valuable tool for developing conversational AI applications.