Self Host OllamaGateway

About OllamaGateway

OllamaGateway is built for private AI infrastructure. If you want API authentication, request auditing, virtual models, and team-friendly controls on top of Ollama, local deployment is the recommended way to run it.

Run in Docker (Recommended)

First, install Docker. Then run the following commands in a Linux shell:

image=aiursoft/ollamagateway
appName=ollamagateway
sudo docker pull $image
sudo docker run -d --name $appName --restart unless-stopped -p 5000:5000 -v /var/www/$appName:/data $image

This starts a web server at http://localhost:5000, where you can test the app.

The default username is admin@default.com and the default password is admin123. Change the password after your first sign-in.
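Once the container is up, a quick smoke test from the host confirms the gateway is answering (this assumes nothing else is bound to port 5000):

```shell
# Print the HTTP status code the gateway returns on its root URL.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5000 \
  || echo "gateway not reachable yet"
```

A 200 (or a redirect code such as 302 to the sign-in page) means the server is serving requests.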

Container Context

Property      Value
Image         aiursoft/ollamagateway
Ports         5000
Binary path   /app
Data path     /data
Config path   /data/appsettings.json
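Because the run command above mounts /var/www/ollamagateway on the host to /data in the container, the config file listed in the table can be edited directly on the host; restart the container afterwards so the changes take effect. A minimal sketch:

```shell
# Edit the gateway config on the host, then restart the container.
# The host path follows from the -v /var/www/ollamagateway:/data mount.
CONFIG=/var/www/ollamagateway/appsettings.json
if [ -f "$CONFIG" ]; then
    sudoedit "$CONFIG"                   # adjust settings
    sudo docker restart ollamagateway    # apply the changes
else
    echo "config not found; is the container running?"
fi
```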
Run in Ubuntu

The following script will install/update this app on your Ubuntu server. Supports Ubuntu 25.04.

curl -sL https://github.com/aiursoftweb/ollamaGateway/raw/master/install.sh | sudo bash

Or with a custom port:

curl -sL https://github.com/aiursoftweb/ollamaGateway/raw/master/install.sh | sudo bash -s 8080
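After the install script finishes, you can check that the service came up. The unit name `ollamagateway` is an assumption here; if it differs, look under /opt/apps or list the installed units:

```shell
# Inspect the installed service (unit name "ollamagateway" is assumed).
if command -v systemctl >/dev/null; then
    systemctl status ollamagateway --no-pager || echo "unit not found; try: systemctl list-units | grep -i ollama"
else
    echo "systemctl not available on this machine"
fi
```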

Service Context

Property         Value
Service manager  Systemd
Install path     /opt/apps
Why local deployment

Keep prompts, API keys, usage logs, and model routing inside your own infrastructure boundary.

This is especially useful when serving a team, internal tools, or autonomous agents over an OpenAI-compatible endpoint.
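As a sketch of what serving an OpenAI-compatible endpoint looks like in practice: the request below assumes the gateway exposes the usual /v1/chat/completions path, and the API key and model name ("my-virtual-model") are placeholders you would replace with a key and a virtual model configured in the gateway.

```shell
# Hypothetical chat completion request against the gateway's
# OpenAI-compatible endpoint; path, key, and model are assumptions.
curl -s http://localhost:5000/v1/chat/completions \
  -H "Authorization: Bearer $YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "my-virtual-model", "messages": [{"role": "user", "content": "Hello"}]}' \
  || echo "request failed; is the gateway running?"
```

Because the endpoint is OpenAI-compatible, existing SDKs and agents only need the base URL and an API key pointed at the gateway.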

Development & Manual Run

Prerequisites

You need the .NET SDK and Node.js installed, since the steps below use `dotnet` and `npm`.

Run Manually

  1. Execute `npm install` in the `wwwroot` folder to install the front-end dependencies.
  2. Execute `dotnet run` in the project root to run the app.
  3. Visit http://localhost:5000.
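The manual steps above can be combined into one short script, run from the repository root:

```shell
# Install front-end dependencies, then start the app (from the repo root).
if [ -d wwwroot ]; then
    (cd wwwroot && npm install)   # front-end dependencies
    dotnet run                    # serves on http://localhost:5000
else
    echo "run this from the repository root"
fi
```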
Run in Visual Studio

  1. Open the `.sln` file in the project path.
  2. Press `F5` to run the app.