Self-hosting Nextcloud on a civil-liberties VPS

An operational guide to standing up Nextcloud on a small offshore VPS — choosing the tier, the OS, the deployment shape, the hardening posture, the backup discipline — written for journalists, archivists and NGO IT leads who have decided Google Drive is not the right place for the work.

Nextcloud is the file-collaboration layer most editorial-register customers reach for first when they decide that an in-jurisdiction provider — Google Drive, Microsoft 365, Dropbox — is no longer the right place for the work. It is mature, it is open-source, it is developed by Nextcloud GmbH, a German company, and its open community rather than by a surveillance-capitalised platform, and it self-hosts cleanly on a small VPS. For an investigative team, an NGO with a regional staff, or a documentary project carrying tens of thousands of source files, Nextcloud on a civil-liberties VPS is the working answer to “where does the corpus live now”.

This article sets out the operational shape of standing Nextcloud up on a small offshore VPS. It is for the reader who has decided the migration is the right call (the migration runbook covers the threshold question separately) and now wants the operational specifics: which tier, which OS, which deployment shape, which hardening posture, which backup discipline. It is not a Nextcloud tutorial — the official Nextcloud documentation is the canonical reference for the application-layer steps — but a sober reading of the choices that matter for a privacy-press deployment.

What Nextcloud is, and what it is not

Nextcloud is a self-hostable file-collaboration platform: a folder tree synced across devices via the Nextcloud clients, a CalDAV calendar, a CardDAV contacts directory, a chat and video-call application called Talk, and a marketplace of optional applications (Collabora Online for browser-side document editing, Deck for kanban-style task tracking, Notes for plain-text notes, Mail for IMAP integration). The base install is a PHP application talking to a PostgreSQL or MariaDB database; the optional applications layer on top of that base.

Nextcloud is not a hosted productivity suite in the Google Workspace sense. The Collabora Online integration produces a credible browser-side document editor for the team that needs one, but it is a separate service running in its own container with its own resource budget; it is not the inline document-editing experience a Google Docs user expects out of the box. Talk produces credible encrypted chat and end-to-end-encrypted calls for small groups, but it is not a Slack replacement for a hundred-strong organisation. Reading these constraints honestly before the install is the difference between a happy deployment and a frustrated one.

Choosing a tier

For most editorial-register Nextcloud deployments, VPS-2 or VPS-4 is the right tier. The Nextcloud application is not CPU-bound; it is memory-bound (the PHP-FPM worker pool, the PostgreSQL shared buffers, Redis for the lock cache) and storage-bound (the corpus itself plus the per-user metadata). A team of five to twenty people with a working corpus of fifty to a few hundred gigabytes fits comfortably on VPS-2; a team of twenty to a hundred with a corpus running into the low terabytes wants VPS-4 or larger.

The bandwidth allocation matters more than the CPU for Nextcloud. A team that synchronises large files across many client devices — video assets, scanned document corpora, full-resolution images — burns transfer at a meaningful rate. VPS-2’s three-terabyte monthly bandwidth fits a small team; VPS-4’s six-terabyte allocation is the safer floor for a working publication’s daily activity. For projects that anticipate the sustained bandwidth running into double-digit terabytes a month, VPS-8 or a dedicated machine is the honest answer.
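A back-of-envelope transfer budget makes the tier question concrete. The head-count and per-user figures below are illustrative assumptions, not measurements:

```shell
# Back-of-envelope monthly transfer — USERS and GB_PER_USER_DAY are
# illustrative assumptions; video-heavy teams run far higher.
USERS=20
GB_PER_USER_DAY=5
echo "$(( USERS * GB_PER_USER_DAY * 30 )) GB/month"   # 3000 GB — already at VPS-2's 3 TB ceiling
```

Run the same arithmetic with your own team's numbers before picking the tier; the answer swings by an order of magnitude between a documents team and a video team.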

The storage figure on the plan page is the working storage. Nextcloud needs additional space for its working files (the trashbin, the version history, the database, the Redis cache) — plan for the corpus to occupy 60-70% of the storage figure rather than 95%. For a corpus that is expected to grow continuously, the sober posture is to provision a tier with headroom rather than to plan migrations every six months.
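A small watchdog makes the 60-70% discipline enforceable rather than aspirational. The mount point below is a placeholder (the demo default of / runs anywhere); point it at the real data volume and run it from cron:

```shell
# Warn once the data volume passes the 70% mark. DATA_MOUNT is a
# placeholder — set it to the real Nextcloud data volume, e.g. /srv/nextcloud-data.
DATA_MOUNT="${DATA_MOUNT:-/}"
USED=$(df --output=pcent "$DATA_MOUNT" | tail -1 | tr -dc '0-9')
if [ "${USED:-0}" -ge 70 ]; then
    echo "WARN: $DATA_MOUNT at ${USED}% — provision the next tier before it fills"
else
    echo "OK: $DATA_MOUNT at ${USED}%"
fi
```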

The OS choice and the filesystem

The operating system choice for a Nextcloud host is one of Debian stable or AlmaLinux — both are conservative, both have well-trodden Nextcloud documentation, both have predictable security-patch cadences. Debian stable is the more common choice in the editorial-register community for reasons of cultural alignment (the Debian project’s social-contract posture and its long history match the privacy-press register) and operational simplicity (the apt package set covers the dependencies cleanly). AlmaLinux is preferred where the operator has prior Red Hat-family operational experience or where SELinux’s targeted policy is a desired addition to the hardening posture.

Full-disk encryption is the default posture; nothing in the deployment requires an unencrypted disk at rest. The encryption is done at the LUKS layer with a passphrase the operator does not retain — the customer enters the passphrase at boot via the out-of-band console (or via dropbear-initramfs for unattended remote unlock). The choice of filesystem above LUKS is conventional: ext4 for the working pattern of small-to-medium files; xfs where the corpus is dominated by large multi-gigabyte files (video archives, full-resolution image libraries). Btrfs is operationally workable for the team that has prior btrfs experience; for a first deployment, ext4 is the more conservative answer.
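After provisioning, it is worth verifying rather than assuming that a path actually sits on the dm-crypt mapping. The check below assumes GNU df and lsblk; the demo call uses / so it runs anywhere — on the real host, point it at the Nextcloud data directory:

```shell
# Confirm a path is backed by a dm-crypt (LUKS) mapping before trusting it at rest.
is_on_crypt() {
    dev=$(df --output=source "$1" | tail -1)
    lsblk -no TYPE "$dev" 2>/dev/null | grep -q '^crypt$'
}

# Demo call against /; on a real host, check the data directory instead.
if is_on_crypt /; then echo "encrypted at rest"; else echo "NOT on an encrypted volume"; fi
```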

The Nextcloud data directory should sit on the encrypted volume, and so should the PostgreSQL data directory. Redis state lives in memory, but if its on-disk persistence (RDB snapshots or the append-only file) is enabled, that dump belongs on the encrypted volume too. The system journal and the log directory should sit on the encrypted volume; logging Nextcloud activity to an unencrypted partition is a metadata leak.

The deployment shape

For an editorial-register deployment, the operator’s preference is Docker Compose over the snap or the source-build deployments. Docker Compose makes the rollback story explicit (re-pull the prior image tag), makes the dependency surface auditable (the compose file is the dependency manifest), and isolates Nextcloud’s PHP runtime from the host’s package set — the host stays a clean Debian or AlmaLinux install with only Docker and the network stack at the OS layer.

The compose file has four services for a working install: nextcloud (the PHP application running under nginx + php-fpm), postgres (the database), redis (the lock cache and the session store), and traefik or caddy (the reverse proxy with automatic Let’s Encrypt TLS issuance). The official Nextcloud docker-compose template is a reasonable starting point; customisations for the editorial-register deployment are mostly hardening (the next section) and backup discipline (the section after that).
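A minimal sketch of that four-service shape follows. Image tags, paths and the placeholder password are illustrative, not a drop-in file — the official nextcloud/docker examples remain the reference:

```yaml
# Illustrative only — pin exact image tags and move secrets out of the file before use.
services:
  nextcloud:
    image: nextcloud:stable
    depends_on: [postgres, redis]
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: change-me      # placeholder — use an env file or secret store
      REDIS_HOST: redis
    volumes:
      - /srv/nextcloud-data:/var/www/html/data   # on the LUKS-backed volume
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: change-me      # same placeholder
    volumes:
      - /srv/pgdata:/var/lib/postgresql/data     # also on the encrypted volume
  redis:
    image: redis:7
  caddy:
    image: caddy:2
    ports: ["80:80", "443:443"]
```

The reverse proxy terminates TLS and forwards to the nextcloud service; the two bind-mounted paths are the ones that must sit on the encrypted volume per the filesystem section above.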

The snap deployment is operationally simpler but carries application-layer constraints — the snap package is opinionated about the storage path, the OS-side snap auto-update cadence is hard to align with the operator’s maintenance windows, and the snap’s PHP runtime is harder to debug when something goes wrong. The snap is a defensible choice for the operator who wants the lowest-effort install and who is willing to accept the constraints; the operator’s preference for editorial-register deployments is the docker-compose route because the long-tail debugging stories favour it.

The source-build deployment is for the operator who wants full visibility into the Nextcloud build and who has the operational capacity to maintain it. It is not the recommendation for a first Nextcloud install.

Hardening the install

The Nextcloud Hardening Guide in the official documentation is the canonical reference for the application-layer hardening steps. The editorial-register customer’s posture is to apply the full set rather than to cherry-pick — the cost of applying everything is operationally low, and the cost of the failure modes it prevents is operationally high.

The specific items that matter for a privacy-press deployment, in approximate priority order: enable two-factor authentication for every account, with TOTP as the default and a recovery code printed once and stored offline; disable public file-sharing for the whole installation if the team’s threat model does not include external collaborators (it can be re-enabled per-folder where needed); leave the application-layer brute-force throttling enabled; enable end-to-end encryption for Talk calls where the conversation is sensitive; and use the per-user app-password feature so that the desktop and mobile clients sync against revocable per-device credentials rather than the user’s primary password.
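Several of these toggles are scriptable through occ, which keeps the hardening posture reproducible. The sketch below assumes a Docker Compose install with the application service named nextcloud; verify the app IDs against your version before running:

```shell
# occ wrapper for a Docker Compose install — the service name "nextcloud"
# is an assumption about your compose file.
occ() { docker compose exec -u www-data nextcloud php occ "$@"; }

harden() {
    occ app:enable twofactor_totp                              # TOTP as the second factor
    occ twofactorauth:enforce --on                             # require 2FA for every account
    occ config:app:set core shareapi_allow_links --value no    # disable public link shares
    # Brute-force throttling is on by default; the point is to leave it on.
}
```

Call harden once the stack is up; re-running it is safe, which is what makes it a posture rather than a one-off.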

For the encryption-at-rest configuration, the application-layer encryption (Nextcloud’s server-side “Default encryption module”) is the wrong answer for most deployments — it adds operational complexity without adding meaningful protection over the LUKS-layer encryption already applied at the disk, and its keys live on the application server in any case. The editorial-register posture is to skip the application-layer encryption and rely on the LUKS layer, the TLS-in-transit guarantees, and the password protection Nextcloud applies to shared links. Where the team’s threat model includes an untrusted external storage backend, the server-side encryption module earns its place; for the privacy-press posture where the operator is part of the trust boundary by design, it is overhead.

The TLS configuration on the reverse proxy is conservative. TLS 1.3 only, HSTS with a meaningful max-age (one year), the security headers (X-Content-Type-Options, X-Frame-Options, Referrer-Policy, Permissions-Policy, Content-Security-Policy) configured to the values the Nextcloud Hardening Guide recommends. The Caddy or Traefik default configuration covers the TLS layer; the security headers are added explicitly.
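In Caddyfile terms the posture above is a short fragment. The domain, the upstream name, and the Permissions-Policy value are placeholders to adapt from the Hardening Guide’s recommendations:

```caddyfile
cloud.example.org {
    tls {
        protocols tls1.3
    }
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains"
        X-Content-Type-Options "nosniff"
        X-Frame-Options "SAMEORIGIN"
        Referrer-Policy "no-referrer"
        Permissions-Policy "camera=(), microphone=(), geolocation=()"
    }
    reverse_proxy nextcloud:80
}
```

Caddy handles the Let’s Encrypt issuance on its own. Note that Nextcloud sets its own Content-Security-Policy at the application layer; avoid overriding it at the proxy unless you are deliberately applying the Hardening Guide’s value.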

Backup discipline

A self-hosted Nextcloud is a single point of failure for the corpus it carries until a backup discipline is running. The discipline that protects against this is restic or borgbackup, configured to run daily, encrypted with a key the editor controls and the hosting operator does not, pushing to an off-site destination that is not in the same hosting jurisdiction as the primary.

The choice of backup destination matters. A backup that lives in the same datacentre as the primary is not a backup — it is a snapshot. A backup that lives in the same hosting jurisdiction as the primary protects against hardware failure but not against legal action. The conservative posture is to push backups to a destination in a different civil-liberties jurisdiction: a primary in Iceland backed up to Switzerland, or vice versa. Providers in both jurisdictions support restic over SFTP or S3-compatible storage cleanly.

The backup cadence is daily for the database and the small-file corpus, with the integrity hash manifest from the migration runbook updated and signed at each backup. The retention policy is the editorial-register customer’s choice but a conservative posture is one daily backup retained for a week, one weekly retained for a month, one monthly retained for a year. Older snapshots can be pruned but the integrity hash manifest covering each retained snapshot stays in the editor’s offline records.
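A sketch of the nightly job, using restic. The repository URL, the key file, the dump path, and the compose service names are all placeholders; the forget flags mirror the retention policy above:

```shell
# Placeholders throughout — repository, key file, and paths are illustrative.
export RESTIC_REPOSITORY="${RESTIC_REPOSITORY:-sftp:backup@offsite.example.net:/srv/nc-backup}"
export RESTIC_PASSWORD_FILE="${RESTIC_PASSWORD_FILE:-/root/.restic-key}"  # key the editor controls

nightly_backup() {
    dump="${DUMP_PATH:-/srv/backup/nextcloud.sql}"
    # Consistent database snapshot first, then the corpus plus the dump in one run.
    docker compose exec -T postgres pg_dump -U nextcloud nextcloud > "$dump"
    restic backup /srv/nextcloud-data "$dump"
    # Retention mirroring the stated policy: 7 dailies, 4 weeklies, 12 monthlies.
    restic forget --prune --keep-daily 7 --keep-weekly 4 --keep-monthly 12
}
```

Wire nightly_backup into cron or a systemd timer; the key file stays out of the repository and out of the hosting operator’s reach.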

The restore discipline is exercised at least quarterly. A backup that has never been restored is a backup whose integrity is unverified; the operator who has run daily backups for two years without a single restore is operating on the untested assumption that the backup process produces restorable data. Quarterly is the operationally workable cadence — frequent enough that the discipline does not slip, infrequent enough that it does not consume the team’s operational attention.
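The drill itself is short. The sketch below assumes restic and the signed hash-manifest discipline from the migration runbook; the scratch directory and manifest name are placeholders:

```shell
# Quarterly restore drill: restore the latest snapshot into a scratch directory,
# then verify every file against the hash manifest. Paths are placeholders.
restore_drill() {
    target="${1:-/tmp/restore-drill}"
    restic restore latest --target "$target"
    ( cd "$target" && sha256sum --check --quiet manifest.sha256 ) \
        && echo "restore verified" \
        || echo "MISMATCH — do not trust the backup chain until explained"
}
```

The verified/mismatch line is what goes into the editor’s offline records alongside the manifest signature.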

When NOT to self-host Nextcloud

Self-hosted Nextcloud is the right answer for a project that has the operational capacity to run it: someone on the team owns the deployment, knows where the logs are, knows when the security advisories are published, knows how to apply a security patch out-of-cycle when the situation calls for it, runs the backup-restore drill quarterly. For a project that does not have that operational capacity — a single journalist, a small NGO without dedicated IT staff, a research project where the principal is also the system administrator — self-hosting is overhead that competes with the editorial work.

The honest alternative for that case is a managed Nextcloud hosting provider in a civil-liberties jurisdiction (several exist; the operator does not advertise a partnership with any of them), which removes the operational burden but accepts the trade-off that the managed provider is part of the trust boundary. For the editorial-register customer whose primary trust concern is jurisdictional posture rather than provider-operator visibility, the trade-off is often acceptable.

The other honest alternative is to scope down. Nextcloud is sometimes the wrong tool for the job. A team of three journalists collaborating on a single investigation does not need a Nextcloud install — they need a shared encrypted folder over Syncthing, an E2EE chat over Signal, and a PGP-key infrastructure that they have already set up. Adding Nextcloud to that team’s operational stack is overhead that does not earn its place.

Operational notes

Nextcloud’s release cadence runs on a “Hub” major-version pattern with maintenance point releases between hubs. The editorial-register posture is to track the oldest still-maintained major release rather than the latest hub — each major receives security patches for a defined window, and the older supported branch carries the fixes without the application-layer churn of the newest release. The official Nextcloud security advisory channel publishes patches on a predictable cadence; the operator subscribes to the advisory mailing list and applies relevant patches within the customer’s documented maintenance window.
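For the Docker Compose shape, applying a point release inside the maintenance window is a short sequence. The service name is an assumption about your compose file, and the fact that the image tag is pinned to the intended branch is what keeps the pull from skipping a major:

```shell
# Patch-window sequence for a Docker Compose install; service name "nextcloud"
# is an assumption. Rollback is re-pulling the prior pinned tag.
apply_patch() {
    docker compose pull nextcloud
    docker compose up -d nextcloud
    docker compose exec -u www-data nextcloud php occ upgrade   # no-op when already current
}
```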

The Collabora Online integration is the right addition for teams that need browser-side document editing. The integration runs Collabora Online as a separate service (typically in its own container on the same host or on a sibling host), and the Nextcloud Office application bridges the two. For a small team, Collabora Online’s resource budget is modest (one to two gigabytes of memory, low sustained CPU); for a larger team, the resource budget grows roughly linearly with concurrent editing sessions and may justify moving Collabora to a sibling VPS-2 of its own.

The mobile clients (Nextcloud iOS and Android) are mature and widely deployed; the desktop client is mature but has historically had rough edges around large-file sync and merge conflicts. For a team whose work product is large files (video, audio, scanned documents), the desktop client’s sync discipline matters more than for a team whose work product is small documents; the discipline is to test the desktop sync against a representative corpus before committing the team to it.

Closing

Self-hosted Nextcloud on a civil-liberties VPS is the working file-collaboration answer for the editorial-register customer who has the operational capacity to run it. The choices that matter — the tier, the OS, the deployment shape, the hardening posture, the backup discipline — are tractable. The operator’s editorial address is the right place to start a conversation about the specific shape of the deployment for a specific team; the runbook above is the generic shape, and every actual deployment is specific.

Payment in Monero, Lightning, on-chain Bitcoin, or cash by post. The operational discipline of payment-rail selection is treated separately in the payment-rails comparative. The brief is to make the file-collaboration layer match the editorial-register posture of the rest of the publishing infrastructure, and Nextcloud on a civil-liberties VPS is most of the work.

More in this register

  1. Leak-aggregator stacks: SecureDrop, GlobaLeaks, OnionShare and the published archive

    A reading of the four-layer architecture of a working leak-aggregator — submission system, processing workstation, archive store, published surface — with notes on which tools fit which layer and why the offshore VPS belongs to the archive end of the stack, not the submission end.

  2. Migrating an investigative archive to offshore hosting

    A fourteen-day sequence for moving an investigative archive — a journalist's corpus, a leak-aggregator's repository, an NGO's case-files — from a commercial host to offshore civil-liberties hosting, with notes on the OPSEC, legal, and chain-of-custody dimensions vendor runbooks omit.