If your VPS is purely for Lemmy, I'd suggest blowing it away and using the Ansible playbook referenced here. I found that the current docker-compose does not function; there are broken references to external nginx configs.
I ran into problems too. Does this posting hint at what is wrong? (The container names need to match hostnames?) https://lemmy.ml/post/1167448
That's a different issue from the one I encountered. For me, the nginx docker config had a reference to the host nginx config... I had no nginx installed, so "docker-compose up" failed.
Thanks for the tip. My VPS currently has several containers with services running that I use for myself and my friend group. Though I DID try running the Ansible playbook against a blank local VM for testing, and I couldn't really even get the Ansible installation working on my control node. I am not very well versed in Linux yet. I scrape by for most things, but much still eludes me.
I am trying to stick it out with Docker for now. My NGINX Proxy Manager seems to work fine for proxying my partially broken setup, and I don't think I will have any issues once I can work out the kinks in the other containers.
Most likely it is an nginx reverse-proxy issue. I would recommend getting rid of the nginx in the docker-compose if you still have it, and directly proxying the Lemmy backend and lemmy-ui via the system nginx, in a similar fashion to the nginx example in the Ansible script.
But it's really hard to do "remote" setup support like this, so you will have to experiment a bit yourself.
I am not an NGINX expert by any means. The instance's lemmy-ui is reachable via the proxy. I can "Sign up" and search for communities and such, but it seems like the backend is failing. Maybe an issue between lemmy and postgres?
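One quick sanity check that separates UI problems from backend problems is to hit the backend's REST API directly through the public proxy (a sketch, using the instance domain that appears later in this thread; adjust to your own):

```shell
# Ask the Lemmy backend for its site metadata through the public proxy.
# A JSON response means the backend and database are reachable, which
# points toward the websocket/proxy layer rather than postgres.
curl -sS "https://lemmy.bulwarkob.com/api/v3/site" | head -c 300
```

If this hangs or errors while the UI itself loads, the problem sits between the proxy and the backend, not in lemmy-ui.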
More likely a websocket failure. I heard from another project that uses websockets for frontend-backend communication that Nginx Proxy Manager seems to have issues with websockets, even when they are enabled via that toggle in the UI. But no real idea what the issue might be.
I hear about issues with Nginx Proxy Manager all the time, but obviously it attracts a certain type of user, so it might not be the tool's fault after all.
Working setup files for my ARM64 Ubuntu host server. The postgres, lemmy, lemmy-ui, and pictrs containers are all on the lemmyinternal network only. The nginx:1-alpine container is on both networks. docker-compose.yml:
spoiler
version: "3.3"

# JatNote = Note from Jattatak for working YML at this time (Jun 8, 2023)

networks:
  # communication to web and clients
  lemmyexternalproxy:
  # communication between lemmy services
  lemmyinternal:
    driver: bridge
    # JatNote: The internal mode for this network is in the official doc,
    # but is what broke my setup. I left it out to fix it. I advise the same.
    # internal: true

services:
  proxy:
    image: nginx:1-alpine
    networks:
      - lemmyinternal
      - lemmyexternalproxy
    ports:
      # only ports facing any connection from outside
      # JatNote: ports mapped to nonsense to prevent collision with NGINX Proxy Manager
      - 680:80
      - 6443:443
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      # set up your certbot and letsencrypt config
      - ./certbot:/var/www/certbot
      - ./letsencrypt:/etc/letsencrypt/live
    restart: always
    depends_on:
      - pictrs
      - lemmy-ui
  lemmy:
    # JatNote: I am running on an ARM Ubuntu virtual server, hence this image.
    # I suggest using matching lemmy/lemmy-ui versions.
    image: dessalines/lemmy:0.17.3-linux-arm64
    hostname: lemmy
    networks:
      - lemmyinternal
    restart: always
    environment:
      - RUST_LOG="warn,lemmy_server=info,lemmy_api=info,lemmy_api_common=info,lemmy_api_crud=info,lemmy_apub=info,lemmy_db_schema=info,lemmy_db_views=info,lemmy_db_views_actor=info,lemmy_db_views_moderator=info,lemmy_routes=info,lemmy_utils=info,lemmy_websocket=info"
    volumes:
      - ./lemmy.hjson:/config/config.hjson
    depends_on:
      - postgres
      - pictrs
  lemmy-ui:
    # JatNote: again, an ARM-based image
    image: dessalines/lemmy-ui:0.17.3-linux-arm64
    hostname: lemmy-ui
    networks:
      - lemmyinternal
    environment:
      # this needs to match the hostname defined in the lemmy service
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
      # set the outside hostname here
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=lemmy.bulwarkob.com:1236
      - LEMMY_HTTPS=true
    depends_on:
      - lemmy
    restart: always
  pictrs:
    image: asonix/pictrs
    # this needs to match the pictrs url in lemmy.hjson
    hostname: pictrs
    networks:
      - lemmyinternal
    environment:
      - PICTRS__API_KEY=API_KEY
    user: 991:991
    volumes:
      - ./volumes/pictrs:/mnt
    restart: always
  postgres:
    image: postgres:15-alpine
    # this needs to match the database host in lemmy.hjson
    hostname: postgres
    networks:
      - lemmyinternal
    environment:
      - POSTGRES_USER=AUser
      - POSTGRES_PASSWORD=APassword
      - POSTGRES_DB=lemmy
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data
    restart: always
lemmy.hjson:
spoiler
{
  # for more info about the config, check out the documentation
  # https://join-lemmy.org/docs/en/administration/configuration.html
  # only a few config options are covered in this example config
  setup: {
    # username for the admin user
    admin_username: "AUser"
    # password for the admin user
    admin_password: "APassword"
    # name of the site (can be changed later)
    site_name: "Bulwark of Boredom"
  }
  opentelemetry_url: "http://otel:4317"
  # the domain name of your instance (eg "lemmy.ml")
  hostname: "lemmy.bulwarkob.com"
  # address where lemmy should listen for incoming requests
  bind: "0.0.0.0"
  # port where lemmy should listen for incoming requests
  port: 8536
  # Whether the site is available over TLS. Needs to be true for federation to work.
  # JatNote: I was advised that this is not necessary. It does work without it.
  # tls_enabled: true
  # pictrs host
  pictrs: {
    url: "http://pictrs:8080/"
    # api_key: "API_KEY"
  }
  # settings related to the postgresql database
  database: {
    # name of the postgres database for lemmy
    database: "lemmy"
    # username to connect to postgres
    user: "aUser"
    # password to connect to postgres
    password: "aPassword"
    # host where postgres is running
    host: "postgres"
    # port where postgres can be accessed
    port: 5432
    # maximum number of active sql connections
    pool_size: 5
  }
}
The following nginx.conf is for the internal proxy, which is included in the docker-compose.yml. This is entirely separate from Nginx-Proxy-Manager (NPM).
nginx.conf:
spoiler
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream lemmy {
        # this needs to map to the lemmy (server) docker service hostname
        server "lemmy:8536";
    }
    upstream lemmy-ui {
        # this needs to map to the lemmy-ui docker service hostname
        server "lemmy-ui:1234";
    }

    server {
        # this is the port inside docker, not the public one yet
        listen 80;
        # change if needed, this is facing the public web
        server_name localhost;
        server_tokens off;

        gzip on;
        gzip_types text/css application/javascript image/svg+xml;
        gzip_vary on;

        # Upload limit, relevant for pictrs
        client_max_body_size 20M;

        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";

        # frontend general requests
        location / {
            # distinguish between ui requests and backend
            # don't change lemmy-ui or lemmy here, they refer to the upstream definitions on top
            set $proxpass "http://lemmy-ui";
            if ($http_accept = "application/activity+json") {
                set $proxpass "http://lemmy";
            }
            if ($http_accept = "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"") {
                set $proxpass "http://lemmy";
            }
            if ($request_method = POST) {
                set $proxpass "http://lemmy";
            }
            proxy_pass $proxpass;

            rewrite ^(.+)/+$ $1 permanent;

            # Send actual client IP upstream
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # backend
        location ~ ^/(api|pictrs|feeds|nodeinfo|.well-known) {
            proxy_pass "http://lemmy";
            # proxy common stuff
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";

            # Send actual client IP upstream
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
The nginx-proxy-manager container only needs to be on the same Docker network as the internal nginx:1-alpine container from the stack.
You need to create a proxy host for HTTP port 80 pointing to the IP address of the internal nginx:1-alpine container on the lemmyexternalproxy network in Docker. Include the websockets support option.
https://lemmy.bulwarkob.com/pictrs/image/55870601-fb24-4346-8a42-bb14bb90d9e8.png
Then, you can use the SSL tab to do your cert and such. NPM is free to work on other networks with other containers as well, as far as I know.
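If you're unsure which IP address to point the NPM proxy host at, you can ask Docker for the addresses on that network (a sketch; the network name comes from the compose file above):

```shell
# Print each container attached to the external proxy network with its IP.
docker network inspect lemmyexternalproxy \
  --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'
```

Note that IPv4Address includes the subnet suffix (e.g. /16), which you drop when entering the address in NPM.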
hey @Jattatak, you seem to be the only other person I can find who is facing troubles similar to mine when trying to set up a lemmy instance. I've redone my docker-compose, nginx.conf, and lemmy.hjson to be exactly the same as yours (with some changes to the password / domain name). I'm also running an nginx proxy manager container.
However, it seems I'm still having the same problem of being able to see post content but not comments from other instances. I have the added problem that when I try to post a comment on my instance, the form freezes until I refresh the page. The comment does actually get posted.
I've also made sure the 'lemmyinternal' network is not isolated. I wonder, did you manage to do anything to troubleshoot this issue? Are there any ports I need to open on my firewall beyond 80 and 443?
How did you set up your NGINX proxy? Can you post your NGINX config file as well as your docker-compose.yml file?
Yeah, I'll do a write-up in a bit with everything I can share that helped. I'll post it under the original thread. Not sure if I can sticky things, but I'll try.
Just posted
hi, can you post your docker-compose.yaml, nginx config and screenshots/logs of failures?
(2/2) I am able to connect to my lemmy-ui site and navigate. When I try to view "Communities", I get stuck with a spinning wheel and never get results. Same for trying to log in or create an account. I have checked the logs for all the containers, and so far the only error is on the lemmy_lemmy_1 container, which is the following (the error is the last entry, labelled DEBUG):
spoiler
2023-06-07T12:08:27.532769Z INFO lemmy_db_schema::utils: Running Database migrations (This may take a long time)...
2023-06-07T12:08:27.538649Z INFO lemmy_db_schema::utils: Database migrations complete.
2023-06-07T12:08:27.551889Z INFO lemmy_server::code_migrations: Running user_updates_2020_04_02
2023-06-07T12:08:27.555253Z INFO lemmy_server::code_migrations: 0 person rows updated.
2023-06-07T12:08:27.555611Z INFO lemmy_server::code_migrations: Running community_updates_2020_04_02
2023-06-07T12:08:27.557394Z INFO lemmy_server::code_migrations: 0 community rows updated.
2023-06-07T12:08:27.557720Z INFO lemmy_server::code_migrations: Running post_updates_2020_04_03
2023-06-07T12:08:27.559061Z INFO lemmy_server::code_migrations: 0 post rows updated.
2023-06-07T12:08:27.559241Z INFO lemmy_server::code_migrations: Running comment_updates_2020_04_03
2023-06-07T12:08:27.562723Z INFO lemmy_server::code_migrations: 0 comment rows updated.
2023-06-07T12:08:27.562972Z INFO lemmy_server::code_migrations: Running private_message_updates_2020_05_05
2023-06-07T12:08:27.563976Z INFO lemmy_server::code_migrations: 0 private message rows updated.
2023-06-07T12:08:27.564197Z INFO lemmy_server::code_migrations: Running post_thumbnail_url_updates_2020_07_27
2023-06-07T12:08:27.565019Z INFO lemmy_server::code_migrations: 0 Post thumbnail_url rows updated.
2023-06-07T12:08:27.565194Z INFO lemmy_server::code_migrations: Running apub_columns_2021_02_02
2023-06-07T12:08:27.566016Z INFO lemmy_server::code_migrations: Running instance_actor_2021_09_29
2023-06-07T12:08:27.572498Z INFO lemmy_server::code_migrations: Running regenerate_public_keys_2022_07_05
2023-06-07T12:08:27.573440Z INFO lemmy_server::code_migrations: Running initialize_local_site_2022_10_10
federation enabled, host is lemmy.bulwarkob.com
Starting http server at 0.0.0.0:8536
2023-06-07T12:08:27.605442Z INFO lemmy_server::scheduled_tasks: Updating active site and community aggregates ...
2023-06-07T12:08:27.624775Z INFO lemmy_server::scheduled_tasks: Done.
2023-06-07T12:08:27.624792Z INFO lemmy_server::scheduled_tasks: Updating banned column if it expires ...
2023-06-07T12:08:27.625406Z INFO lemmy_server::scheduled_tasks: Reindexing table concurrently post_aggregates ...
2023-06-07T12:08:27.705125Z INFO lemmy_server::scheduled_tasks: Done.
2023-06-07T12:08:27.705146Z INFO lemmy_server::scheduled_tasks: Reindexing table concurrently comment_aggregates ...
2023-06-07T12:08:27.728006Z INFO lemmy_server::scheduled_tasks: Done.
2023-06-07T12:08:27.728027Z INFO lemmy_server::scheduled_tasks: Reindexing table concurrently community_aggregates ...
2023-06-07T12:08:27.756015Z INFO lemmy_server::scheduled_tasks: Done.
2023-06-07T12:08:27.756146Z INFO lemmy_server::scheduled_tasks: Clearing old activities...
2023-06-07T12:08:27.757461Z INFO lemmy_server::scheduled_tasks: Done.
2023-06-07T12:11:53.210731Z DEBUG HTTP request{http.method=GET http.scheme="http" http.host=lemmy.bulwarkob.com http.target=/api/v3/post/list otel.kind="server" request_id=254f6c39-9146-42e9-9353-06c0e6c1cea4}:perform{self=GetPosts { type_: Some(Local), sort: Some(Active), page: Some(1), limit: Some(40), community_id: None, community_name: None, saved_only: None, auth: None }}: lemmy_db_views::post_view: Post View Query: Query { sql: "SELECT \"post\".\"id\", \"post\".\"name\", \"post\".\"url\", \"post\".\"body\", \"post\".\"creator_id\", \"post\".\"community_id\", \"post\".\"removed\", \"post\".\"locked\", \"post\".\"published\", \"post\".\"updated\", \"post\".\"deleted\", \"post\".\"nsfw\", \"post\".\"embed_title\", \"post\".\"embed_description\", \"post\".\"embed_video_url\", \"post\".\"thumbnail_url\", \"post\".\"ap_id\", \"post\".\"local\", \"post\".\"language_id\", \"post\".\"featured_community\", \"post\".\"featured_local\", \"person\".\"id\", \"person\".\"name\", \"person\".\"display_name\", \"person\".\"avatar\", \"person\".\"banned\", \"person\".\"published\", \"person\".\"updated\", \"person\".\"actor_id\", \"person\".\"bio\", \"person\".\"local\", \"person\".\"banner\", \"person\".\"deleted\", \"person\".\"inbox_url\", \"person\".\"shared_inbox_url\", \"person\".\"matrix_user_id\", \"person\".\"admin\", \"person\".\"bot_account\", \"person\".\"ban_expires\", \"person\".\"instance_id\", \"community\".\"id\", \"community\".\"name\", \"community\".\"title\", \"community\".\"description\", \"community\".\"removed\", \"community\".\"published\", \"community\".\"updated\", \"community\".\"deleted\", \"community\".\"nsfw\", \"community\".\"actor_id\", \"community\".\"local\", \"community\".\"icon\", \"community\".\"banner\", \"community\".\"hidden\", \"community\".\"posting_restricted_to_mods\", \"community\".\"instance_id\", \"community_person_ban\".\"id\", \"community_person_ban\".\"community_id\", \"community_person_ban\".\"person_id\", 
\"community_person_ban\".\"published\", \"community_person_ban\".\"expires\", \"post_aggregates\".\"id\", \"post_aggregates\".\"post_id\", \"post_aggregates\".\"comments\", \"post_aggregates\".\"score\", \"post_aggregates\".\"upvotes\", \"post_aggregates\".\"downvotes\", \"post_aggregates\".\"published\", \"post_aggregates\".\"newest_comment_time_necro\", \"post_aggregates\".\"newest_comment_time\", \"post_aggregates\".\"featured_community\", \"post_aggregates\".\"featured_local\", \"community_follower\".\"id\", \"community_follower\".\"community_id\", \"community_follower\".\"person_id\", \"community_follower\".\"published\", \"community_follower\".\"pending\", \"post_saved\".\"id\", \"post_saved\".\"post_id\", \"post_saved\".\"person_id\", \"post_saved\".\"published\", \"post_read\".\"id\", \"post_read\".\"post_id\", \"post_read\".\"person_id\", \"post_read\".\"published\", \"person_block\".\"id\", \"person_block\".\"person_id\", \"person_block\".\"target_id\", \"person_block\".\"published\", \"post_like\".\"score\", coalesce((\"post_aggregates\".\"comments\" - \"person_post_aggregates\".\"read_comments\"), \"post_aggregates\".\"comments\") FROM ((((((((((((\"post\" INNER JOIN \"person\" ON (\"post\".\"creator_id\" = \"person\".\"id\")) INNER JOIN \"community\" ON (\"post\".\"community_id\" = \"community\".\"id\")) LEFT OUTER JOIN \"community_person_ban\" ON (((\"post\".\"community_id\" = \"community_person_ban\".\"community_id\") AND (\"community_person_ban\".\"person_id\" = \"post\".\"creator_id\")) AND ((\"community_person_ban\".\"expires\" IS NULL) OR (\"community_person_ban\".\"expires\" > CURRENT_TIMESTAMP)))) INNER JOIN \"post_aggregates\" ON (\"post_aggregates\".\"post_id\" = \"post\".\"id\")) LEFT OUTER JOIN \"community_follower\" ON ((\"post\".\"community_id\" = \"community_follower\".\"community_id\") AND (\"community_follower\".\"person_id\" = $1))) LEFT OUTER JOIN \"post_saved\" ON ((\"post\".\"id\" = \"post_saved\".\"post_id\") AND 
(\"post_saved\".\"person_id\" = $2))) LEFT OUTER JOIN \"post_read\" ON ((\"post\".\"id\" = \"post_read\".\"post_id\") AND (\"post_read\".\"person_id\" = $3))) LEFT OUTER JOIN \"person_block\" ON ((\"post\".\"creator_id\" = \"person_block\".\"target_id\") AND (\"person_block\".\"person_id\" = $4))) LEFT OUTER JOIN \"community_block\" ON ((\"community\".\"id\" = \"community_block\".\"community_id\") AND (\"community_block\".\"person_id\" = $5))) LEFT OUTER JOIN \"post_like\" ON ((\"post\".\"id\" = \"post_like\".\"post_id\") AND (\"post_like\".\"person_id\" = $6))) LEFT OUTER JOIN \"person_post_aggregates\" ON ((\"post\".\"id\" = \"person_post_aggregates\".\"post_id\") AND (\"person_post_aggregates\".\"person_id\" = $7))) LEFT OUTER JOIN \"local_user_language\" ON ((\"post\".\"language_id\" = \"local_user_language\".\"language_id\") AND (\"local_user_language\".\"local_user_id\" = $8))) WHERE ((((((((\"community\".\"local\" = $9) AND ((\"community\".\"hidden\" = $10) OR (\"community_follower\".\"person_id\" = $11))) AND (\"post\".\"nsfw\" = $12)) AND (\"community\".\"nsfw\" = $13)) AND (\"post\".\"removed\" = $14)) AND (\"post\".\"deleted\" = $15)) AND (\"community\".\"removed\" = $16)) AND (\"community\".\"deleted\" = $17)) ORDER BY \"post_aggregates\".\"featured_local\" DESC , hot_rank(\"post_aggregates\".\"score\", \"post_aggregates\".\"newest_comment_time_necro\") DESC , \"post_aggregates\".\"newest_comment_time_necro\" DESC LIMIT $18 OFFSET $19", binds: [-1, -1, -1, -1, -1, -1, -1, -1, true, false, -1, false, false, false, false, false, false, 40, 0] }
(1/2) Alright, thanks for helping.
docker-compose.yml
spoiler
version: "3.3"

networks:
  # communication to web and clients
  lemmyexternalproxy:
  # communication between lemmy services
  lemmyinternal:
    driver: bridge
    internal: true

services:
  lemmy:
    image: dessalines/lemmy
    # this hostname is used in the nginx reverse proxy and also for lemmy-ui to connect to the backend, do not change
    hostname: lemmy
    networks:
      - lemmyinternal
    restart: always
    environment:
      - RUST_LOG="warn,lemmy_server=debug,lemmy_api=debug,lemmy_api_common=debug,lemmy_api_crud=debug,lemmy_apub=debug,lemmy_db_schema=debug,lemmy_db_views=debug,lemmy_db_views_actor=debug,lemmy_db_views_moderator=debug,lemmy_routes=debug,lemmy_utils=debug,lemmy_websocket=debug"
      - RUST_BACKTRACE=full
    volumes:
      - ./lemmy.hjson:/config/config.hjson:Z
    depends_on:
      - postgres
      - pictrs
  lemmy-ui:
    image: dessalines/lemmy-ui
    # use this to build your local lemmy-ui image for development
    # run: docker compose up --build
    # assuming lemmy-ui is cloned beside the lemmy directory
    # build:
    #   context: ../../lemmy-ui
    #   dockerfile: dev.dockerfile
    networks:
      - lemmyinternal
    environment:
      # this needs to match the hostname defined in the lemmy service
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
      # set the outside hostname here
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=lemmy.bulwarkob.com:1236
      - LEMMY_HTTPS=false
      - LEMMY_UI_DEBUG=true
    depends_on:
      - lemmy
    restart: always
  pictrs:
    image: asonix/pictrs:0.4.0-beta.19
    # this needs to match the pictrs url in lemmy.hjson
    hostname: pictrs
    # we can pass options to pict-rs like this; here we set the max image size and a forced format for conversion
    # entrypoint: /sbin/tini -- /usr/local/bin/pict-rs -p /mnt -m 4 --image-format webp
    networks:
      - lemmyinternal
    environment:
      - PICTRS_OPENTELEMETRY_URL=http://otel:4137
      - PICTRS__API_KEY=API_KEY
      - RUST_LOG=debug
      - RUST_BACKTRACE=full
      - PICTRS__MEDIA__VIDEO_CODEC=vp9
      - PICTRS__MEDIA__GIF__MAX_WIDTH=256
      - PICTRS__MEDIA__GIF__MAX_HEIGHT=256
      - PICTRS__MEDIA__GIF__MAX_AREA=65536
      - PICTRS__MEDIA__GIF__MAX_FRAME_COUNT=400
    user: 991:991
    volumes:
      - ./volumes/pictrs:/mnt:Z
    restart: always
  postgres:
    image: postgres:15-alpine
    # this needs to match the database host in lemmy.hjson
    # Tune your settings via
    # https://pgtune.leopard.in.ua/#/
    # You can use this technique to add them here
    # https://stackoverflow.com/a/30850095/1655478
    hostname: postgres
    command:
      [
        "postgres",
        "-c",
        "session_preload_libraries=auto_explain",
        "-c",
        "auto_explain.log_min_duration=5ms",
        "-c",
        "auto_explain.log_analyze=true",
        "-c",
        "track_activity_query_size=1048576",
      ]
    networks:
      - lemmyinternal
      # adding the external-facing network to allow direct db access for devs
      - lemmyexternalproxy
    ports:
      # use a different port so it doesn't conflict with a potential postgres db running on the host
      - "5433:5432"
    environment:
      - POSTGRES_USER=noUsrHere
      - POSTGRES_PASSWORD=noPassHere
      - POSTGRES_DB=noDbHere
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data:Z
    restart: always
The NGINX I am using is not the one that came with the stack, but a separate single container running nginx-proxy-manager. I did not customize the conf it installed with, and only used the UI to set up the proxy host and SSL, both of which are working (the front end, at least). The config seems unrelated to this; however, I can share it if the rest of the information below is not enough.
nginx config and lemmy.hjson would be useful as well
Sure thing. lemmy.hjson:
spoiler
{
  # for more info about the config, check out the documentation
  # https://join-lemmy.org/docs/en/administration/configuration.html
  # only a few config options are covered in this example config
  setup: {
    # username for the admin user
    admin_username: "noUsrHere"
    # password for the admin user
    admin_password: "noPassHere"
    # name of the site (can be changed later)
    site_name: "Bulwark of Boredom"
  }
  # the domain name of your instance (eg "lemmy.ml")
  hostname: "lemmy.bulwarkob.com"
  # address where lemmy should listen for incoming requests
  bind: "0.0.0.0"
  # port where lemmy should listen for incoming requests
  port: 8536
  # Whether the site is available over TLS. Needs to be true for federation to work.
  tls_enabled: true
  # pictrs host
  pictrs: {
    url: "http://pictrs:8080/"
    api_key: "API_KEY"
  }
  # settings related to the postgresql database
  database: {
    # name of the postgres database for lemmy
    database: "noDbHere"
    # username to connect to postgres
    user: "noUsrHere"
    # password to connect to postgres
    password: "noPassHere"
    # host where postgres is running
    host: "postgres"
    # port where postgres can be accessed
    port: 5432
    # maximum number of active sql connections
    pool_size: 5
  }
}
I am not certain whether I am somehow grabbing the config from the wrong location in the container. There is no volume or link for a conf file from host to container, so I am just grabbing from the default location, /etc/nginx/nginx.conf:
spoiler
# run nginx in foreground
daemon off;
pid /run/nginx/nginx.pid;
user npm;

# Set number of worker processes automatically based on number of CPU cores.
worker_processes auto;

# Enables the use of JIT for regular expressions to speed-up their processing.
pcre_jit on;

error_log /data/logs/fallback_error.log warn;

# Includes files with directives to load dynamic modules.
include /etc/nginx/modules/*.conf;

events {
    include /data/nginx/custom/events[.]conf;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    server_tokens off;
    tcp_nopush on;
    tcp_nodelay on;
    client_body_temp_path /tmp/nginx/body 1 2;
    keepalive_timeout 90s;
    proxy_connect_timeout 90s;
    proxy_send_timeout 90s;
    proxy_read_timeout 90s;
    ssl_prefer_server_ciphers on;
    gzip on;
    proxy_ignore_client_abort off;
    client_max_body_size 2000m;
    server_names_hash_bucket_size 1024;
    proxy_http_version 1.1;
    proxy_set_header X-Forwarded-Scheme $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Accept-Encoding "";
    proxy_cache off;
    proxy_cache_path /var/lib/nginx/cache/public levels=1:2 keys_zone=public-cache:30m max_size=192m;
    proxy_cache_path /var/lib/nginx/cache/private levels=1:2 keys_zone=private-cache:5m max_size=1024m;

    log_format proxy '[$time_local] $upstream_cache_status $upstream_status $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] [Sent-to $server] "$http_user_agent" "$http_referer"';
    log_format standard '[$time_local] $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] "$http_user_agent" "$http_referer"';

    access_log /data/logs/fallback_access.log proxy;

    # Dynamically generated resolvers file
    include /etc/nginx/conf.d/include/resolvers.conf;

    # Default upstream scheme
    map $host $forward_scheme {
        default http;
    }

    # Real IP Determination
    # Local subnets:
    set_real_ip_from 10.0.0.0/8;
    set_real_ip_from 172.16.0.0/12; # Includes Docker subnet
    set_real_ip_from 192.168.0.0/16;
    # NPM generated CDN ip ranges:
    include conf.d/include/ip_ranges.conf;
    # always put the following 2 lines after ip subnets:
    real_ip_header X-Real-IP;
    real_ip_recursive on;

    # Custom
    include /data/nginx/custom/http_top[.]conf;

    # Files generated by NPM
    include /etc/nginx/conf.d/*.conf;
    include /data/nginx/default_host/*.conf;
    include /data/nginx/proxy_host/*.conf;
    include /data/nginx/redirection_host/*.conf;
    include /data/nginx/dead_host/*.conf;
    include /data/nginx/temp/*.conf;

    # Custom
    include /data/nginx/custom/http[.]conf;
}

stream {
    # Files generated by NPM
    include /data/nginx/stream/*.conf;

    # Custom
    include /data/nginx/custom/stream[.]conf;
}

# Custom
include /data/nginx/custom/root[.]conf;
it seems there is no config for lemmy nginx here.. might be in other files?
I may be mistaken in how I proceeded, but as many are reporting, the docker-compose and general Docker instructions provided by the install guide don't quite seem to work as expected. I have been trying to piece this together, and the included lemmy nginx service container was completely excluded (edited out/deleted) once I had the standalone nginx-proxy-manager set up and working for regular 80,443 -> 1234 proxy requests to the lemmy-ui container.
Does the lemmy nginx have a specific role or tie-in? I am still fairly new to reverse proxying in general.
yeah, nginx config for lemmy is not very straightforward. you need to mimic this:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream lemmy {
        server "lemmy:8536";
    }
    upstream lemmy-ui {
        server "lemmy-ui:1234";
    }

    server {
        listen 1236;
        server_name localhost;

        # frontend
        location / {
            set $proxpass "http://lemmy-ui";
            if ($http_accept = "application/activity+json") {
                set $proxpass "http://lemmy";
            }
            if ($http_accept = "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"") {
                set $proxpass "http://lemmy";
            }
            if ($request_method = POST) {
                set $proxpass "http://lemmy";
            }
            proxy_pass $proxpass;

            rewrite ^(.+)/+$ $1 permanent;

            # Send actual client IP upstream
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # backend
        location ~ ^/(api|pictrs|feeds|nodeinfo|.well-known) {
            proxy_pass "http://lemmy";
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";

            # Add IP forwarding headers
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
also - can you check if all containers are running? just do docker-compose ps in the lemmy dir.
All containers are running. I handle them with Portainer, though I built the stack from the CLI in the lemmy dir, so Portainer can't fully manage them. Reboots, logs, networking, and such work fine though.
As for the nginx config: the nginx proxy manager I use currently has all proxy-host settings configured from the web GUI, where I set up the proxy host information and the SSL information. I did no manual edits to any configurations or settings of the container during or after compose; only GUI actions. When looking at the nginx.conf I replied with here (my current conf), I do not see anything related to the proxy host I created from the GUI. I am not sure if that is normal, or if I maybe have the wrong .conf included here.
With that in mind, would you suggest I simply overwrite and/or add your snippet to my existing conf file?
try to look here for the config file:
include /etc/nginx/conf.d/*.conf;
include /data/nginx/default_host/*.conf;
include /data/nginx/proxy_host/*.conf;
include /data/nginx/redirection_host/*.conf;
include /data/nginx/dead_host/*.conf;
include /data/nginx/temp/*.conf;
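A sketch of how to peek into those paths from the host, assuming your NPM container is named `nginx-proxy-manager` (adjust to whatever `docker ps` shows):

```shell
# List the host configs NPM has generated inside its container.
docker exec nginx-proxy-manager sh -c \
  'ls -l /data/nginx/proxy_host /data/nginx/default_host /etc/nginx/conf.d'
```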
btw, i think the port in lemmy.bulwarkob.com:1236 in docker-compose is not needed for you, it should be just lemmy.bulwarkob.com
I appreciate your patience and clear assistance.
conf.d/* has two configurations that appear to be some form of default: default.conf and production.conf. production.conf is only for the admin GUI. default.conf:
The container has a volume set: /lemmy/docker/nginx-proxy-manager/data:/data
I have those folders and more, and they DO seem to have the correct custom items.
Specifically, in the proxy_host folder I have a configuration (1.conf) for the proxy host I set up in the GUI:
spoiler
# ------------------------------------------------------------
# lemmy.bulwarkob.com
# ------------------------------------------------------------
server {
    set $forward_scheme http;
    set $server "172.24.0.5";
    set $port 1234;

    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name lemmy.bulwarkob.com;

    # Let's Encrypt SSL
    include conf.d/include/letsencrypt-acme-challenge.conf;
    include conf.d/include/ssl-ciphers.conf;
    ssl_certificate /etc/letsencrypt/live/npm-1/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/npm-1/privkey.pem;

    # Block Exploits
    include conf.d/include/block-exploits.conf;

    # Force SSL
    include conf.d/include/force-ssl.conf;

    access_log /data/logs/proxy-host-1_access.log proxy;
    error_log /data/logs/proxy-host-1_error.log warn;

    location / {
        # Proxy!
        include conf.d/include/proxy.conf;
    }

    # Custom
    include /data/nginx/custom/server_proxy[.]conf;
}
The rest of the folders are empty.
okay, i don't know how npm works, could you check this tutorial to see if you have set it up similarly?
also - check the docker-compose.yml settings to remove the port for the lemmy host, and i think you need to set "use https" to true as it is provided by npm
I actually started with this tutorial a few days ago after failing with the official guide. I followed it but was unable to get it running due to unexpected errors. I'm guessing the tutorial is somewhat out of date. I've made progress since using that guide, though, so I will see if I can pull any useful bits out of it later today and continue.
Worst case, I could also just ditch NPM if I can get another NGINX set up in a way that you know how to configure correctly.
Hey, if you still feel like helping out :D
I've been through a boatload of changes today since earlier. I've rebuilt using mostly the yml provided in the official guide, and after some tweaking almost everything is working. The internal proxy is now working, and as far as I can tell the containers are fully working amongst themselves. I do not know how to set up a web-facing reverse proxy in a way that works around the internal proxy already running (other than the already-in-place NPM). I turned NPM back on and was able to get it working to reach the site; however, I cannot reach any other communities from within my site. I believe the NPM reverse proxy is just not set up right. Error message in lemmy:
spoiler
ERROR HTTP request{http.method=GET http.scheme="https" http.host=lemmy.bulwarkob.com http.target=/api/v3/ws otel.kind="server" request_id=69004ca6-7967-48c3-a4d2-583e961e34d3 http.status_code=101 otel.status_code="OK"}: lemmy_server::api_routes_websocket: couldnt_find_object: Request error: error sending request for url (https://midwest.social/.well-known/webfinger?resource=acct:projectzomboid@midwest.social): operation timed out
0: lemmy_apub::fetcher::search::search_query_to_object_id
at crates/apub/src/fetcher/search.rs:17
1: lemmy_apub::api::resolve_object::perform
with self=ResolveObject { q: "[!projectzomboid@midwest.social](/c/projectzomboid@midwest.social)", auth: Some(Sensitive) }
at crates/apub/src/api/resolve_object.rs:21
2: lemmy_server::root_span_builder::HTTP request
with http.method=GET http.scheme="https" http.host=lemmy.bulwarkob.com http.target=/api/v3/ws otel.kind="server" request_id=69004ca6-7967-48c3-a4d2-583e961e34d3 http.status_code=101 otel.status_code="OK"
at src/root_span_builder.rs:16
I would be happy to remove NPM from this stack if it's not too difficult to get a correctly working reverse proxy set up. The documentation doesn't give much to work with here.
from the log it seems that lemmy cannot reach https://midwest.social/ - if you have more of these operation timed out errors, there is probably a networking issue with outgoing requests - maybe you have some kind of firewall? i can reach your instance from the other direction: https://group.lt/c/bulwarkob@lemmy.bulwarkob.com
probably the easiest way to set up lemmy plus another front-facing reverse proxy is to run the nginx that comes with lemmy on another port and set up simple reverse proxying from NPM to it. i myself use caddy for reverse proxying, with this config: https://join-lemmy.org/docs/en/administration/caddy.html
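as a rough sketch (the domain and port here are placeholders, assuming lemmy's bundled nginx is published on host port 8080), the caddy version of that front proxy is only a few lines - caddy handles tls and websocket upgrades automatically:

```
lemmy.example.com {
    # forward everything to the nginx bundled with the lemmy docker stack
    reverse_proxy localhost:8080
}
```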
I see that the instance can be reached and posts are shown, however comments are not. I found in the official docs that there is a config snippet for a web-facing reverse proxy: https://join-lemmy.org/docs/en/administration/troubleshooting.html https://github.com/LemmyNet/lemmy-ansible/blob/main/templates/nginx.conf
And this config appears quite different from the "Install with Docker" config instructions: https://join-lemmy.org/docs/en/administration/install_docker.html
spoiler
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream lemmy {
        # this needs to map to the lemmy (server) docker service hostname
        server "lemmy:8536";
    }

    upstream lemmy-ui {
        # this needs to map to the lemmy-ui docker service hostname
        server "lemmy-ui:1234";
    }

    server {
        # this is the port inside docker, not the public one yet
        listen 80;

        # change if needed, this is facing the public web
        server_name localhost;
        server_tokens off;

        gzip on;
        gzip_types text/css application/javascript image/svg+xml;
        gzip_vary on;

        # Upload limit, relevant for pictrs
        client_max_body_size 20M;

        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";

        # frontend general requests
        location / {
            # distinguish between ui requests and backend
            # don't change lemmy-ui or lemmy here, they refer to the upstream definitions on top
            set $proxpass "http://lemmy-ui";

            if ($http_accept = "application/activity+json") {
                set $proxpass "http://lemmy";
            }
            if ($http_accept = "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"") {
                set $proxpass "http://lemmy";
            }
            if ($request_method = POST) {
                set $proxpass "http://lemmy";
            }

            proxy_pass $proxpass;
            rewrite ^(.+)/+$ $1 permanent;

            # Send actual client IP upstream
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # backend
        location ~ ^/(api|pictrs|feeds|nodeinfo|.well-known) {
            proxy_pass "http://lemmy";

            # proxy common stuff
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";

            # Send actual client IP upstream
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
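The Accept-header routing in that config is the key trick. As a hypothetical sketch (this is not Lemmy code, just the $proxpass decision written out), the selection logic amounts to:

```python
# Mirrors the nginx $proxpass selection above: requests carrying an
# ActivityPub Accept header, and all POSTs, go to the lemmy backend;
# everything else is served by lemmy-ui.
ACTIVITYPUB_ACCEPTS = {
    "application/activity+json",
    'application/ld+json; profile="https://www.w3.org/ns/activitystreams"',
}

def choose_upstream(method: str, accept: str) -> str:
    if accept in ACTIVITYPUB_ACCEPTS or method == "POST":
        return "http://lemmy"
    return "http://lemmy-ui"
```

So a browser GET lands on the ui, while federation traffic and API POSTs reach the backend.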
Do you know if I should expect to have TWO separate NGINX proxy instances (assuming I use NGINX)? One in-stack, and one separate for the web-facing reverse proxy? Or do I need to combine the two configs into one instance?
I am going to see if I can get a caddy reverse proxy setup in the meantime and see how it performs given your configuration there.
you can have two nginx proxy instances: one as a front (serving other sites besides the lemmy instance) and another coupled with the lemmy instance. in that case the first one can be configured minimally, with basic proxying to the internal lemmy one - no need for the fancy lemmy and lemmy-ui routing:
location / {
    proxy_pass http://nginx-lemmy-docker:someport;
}
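note that /api/v3/ws is a websocket endpoint, so the front proxy also needs the upgrade headers - a sketch, with the upstream name and port kept as placeholders:

```nginx
location / {
    proxy_pass http://nginx-lemmy-docker:someport;

    # websocket support (required for /api/v3/ws)
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # pass the real client and host info through
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```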
I believe I have the simple setup for the NPM reverse proxy. Just as you say, it points to the docker address of the lemmy instance's NGINX. I can get to my instance with HTTPS, secured and good. I'm just getting errors when communicating with other instances. I can only imagine it is websocket related, but I am not familiar enough to look at the proxy configs and determine what might be wrong, unfortunately. I might need to find someone to essentially look through it with me in real time.
spoiler
ERROR HTTP request{http.method=GET http.scheme="https" http.host=lemmy.bulwarkob.com http.target=/api/v3/ws otel.kind="server" request_id=67d75886-bf48-4444-a435-d98d8fc1e303 http.status_code=101 otel.status_code="OK"}: lemmy_server::api_routes_websocket: couldnt_find_object: Request error: error sending request for url (https://lemmy.ml/.well-known/webfinger?resource=acct:asklemmy@lemmy.ml): operation timed out
0: lemmy_apub::fetcher::search::search_query_to_object_id
at crates/apub/src/fetcher/search.rs:17
1: lemmy_apub::api::resolve_object::perform
with self=ResolveObject { q: "[!asklemmy@lemmy.ml](/c/asklemmy@lemmy.ml)", auth: Some(Sensitive) }
at crates/apub/src/api/resolve_object.rs:21
2: lemmy_server::root_span_builder::HTTP request
with http.method=GET http.scheme="https" http.host=lemmy.bulwarkob.com http.target=/api/v3/ws otel.kind="server" request_id=67d75886-bf48-4444-a435-d98d8fc1e303 http.status_code=101 otel.status_code="OK"
at src/root_span_builder.rs:16
spoiler
WARN Error encountered while processing the incoming HTTP request: lemmy_server::root_span_builder: Request error: error sending request for url (https://beehaw.org/u/Jattatak): operation timed out
0: lemmy_server::root_span_builder::HTTP request
with http.method=POST http.scheme="https" http.host=lemmy.bulwarkob.com http.target=/inbox otel.kind="server" request_id=f413d3e5-262a-4dac-bc2e-700b9a053954 http.status_code=400 otel.status_code="OK"
at src/root_span_builder.rs:16
LemmyError { message: None, inner: Request error: error sending request for url (https://beehaw.org/u/Jattatak): operation timed out
Caused by:
0: error sending request for url (https://beehaw.org/u/Jattatak): operation timed out
1: operation timed out, context: "SpanTrace" }
from the logs it seems that the lemmy docker container does not communicate with outside servers.
also, i have a slightly different config for lemmy.hjson:
{
  # for more info about the config, check out the documentation
  # https://join-lemmy.org/docs/en/administration/configuration.html
  setup: {
    # username for the admin user
    admin_username: "adminuser"
    # password for the admin user
    admin_password: "adminpassword"
    # name of the site (can be changed later)
    site_name: "group.lt"
  }

  opentelemetry_url: "http://otel:4317"

  # the domain name of your instance (eg "lemmy.ml")
  hostname: "group.lt"
  # address where lemmy should listen for incoming requests
  bind: "0.0.0.0"
  # port where lemmy should listen for incoming requests
  port: 8536

  # address where pictrs is available
  pictrs: {
    url: "http://pictrs:8080/"
    # api_key: "API_KEY"
  }

  # settings related to the postgresql database
  database: {
    # name of the postgres database for lemmy
    database: "lemmy"
    # username to connect to postgres
    user: "lemmy"
    # password to connect to postgres
    password: "lemmy"
    # host where postgres is running
    host: "postgres"
    # port where postgres can be accessed
    port: 5432
    # maximum number of active sql connections
    pool_size: 5
  }

  # optional: email sending configuration
  email: {
    # hostname and port of the smtp server
    smtp_server: "postfix:25"
    smtp_from_address: "from@group.lt"
    tls_type: false
  }
}
also check in the admin interface (https://lemmy.bulwarkob.com/admin) whether federation is enabled and that you are not blocklisting instances - and maybe try enabling federation debug mode for a while
The differences I see are the otel link and the TLS setting:
# Whether the site is available over TLS. Needs to be true for federation to work.
tls_enabled: true
I see you don't have it in there, which I would assume means you can't federate? I have added the otel link and enabled debug mode. Federation is already enabled and the allowed instances setting is "ALL". Still no luck on this end. Same status, except now I'm not getting any errors in the container logs (viewed from Portainer).
Including this in case it is a possible issue:
federation enabled, host is lemmy.bulwarkob.com
Starting http server at 0.0.0.0:8536
also pictrs: { url: "http://pictrs:8080/" # api_key: "API_KEY" }
about the tls setting - i don't remember why i removed it, but group.lt federates fine. not sure what you mean by the instance being set to "ALL".
what about network isolation in portainer? maybe it is on?
I see my pictrs config appears to be the same as what you sent over. Portainer network isolation does not appear to be in place. All networks are bridged, and I would assume access issues would be more encompassing if that were directly related to the issue. I'm still betting on user error in the configuration so far. Being myself, of course.
well, probably you are right about the user error, but from the logs it seems that it cannot reach other instances - can you enter the shell of the container and check if you are able to ping/curl https://group.lt for example? also, network isolation is a checkbox in portainer, according to the docs.
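for example, something like this from the host (the container name is a guess - check docker ps for yours):

```shell
# open a shell in the lemmy container (name may differ; see `docker ps`)
docker exec -it lemmy /bin/sh

# from inside the container, test outbound connectivity
curl -sI https://group.lt
```

if the curl times out inside the container but works on the host, the problem is the docker network, not lemmy itself.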
as for federation itself, i have also experienced it not working when my nginx config pointed wrongly to lemmy and lemmy-ui depending on the headers.
as i have said before - i can reach your instance from my lemmy, but don't receive anything back.
It would seem it was called "Internal" as opposed to isolated on my Portainer. That appears to have been it, though. I can get to other communities now. Still having some disparity with posts and comments showing up, but I'm hoping that will sort itself out in time.
fantastic ;)
I will endeavour to pay it forward by helping others if I can.