[-] Jattatak@beehaw.org 3 points 1 year ago

Working setup files for my ARM64 Ubuntu host server. The postgres, lemmy, lemmy-ui, and pictrs containers are all on the lemmyinternal network only. The nginx:1-alpine container is on both networks. docker-compose.yml:

spoiler


version: "3.3"
# JatNote = Note from Jattatak for working YML at this time (Jun 8, 2023)
networks:
  # communication to web and clients
  lemmyexternalproxy:
  # communication between lemmy services
  lemmyinternal:
    driver: bridge
    #JatNote: The 'internal: true' setting for this network is in the official doc, but it is what broke my setup.
    # I left it commented out to fix it; I advise doing the same.
#    internal: true

services:
  proxy:
    image: nginx:1-alpine
    networks:
      - lemmyinternal
      - lemmyexternalproxy
    ports:
      # only ports facing any connection from outside
      # JatNote: Ports mapped to nonsense to prevent collision with NGINX Proxy Manager
      - 680:80
      - 6443:443
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      # set up your certbot and Let's Encrypt config
      - ./certbot:/var/www/certbot
      - ./letsencrypt:/etc/letsencrypt/live
    restart: always
    depends_on:
      - pictrs
      - lemmy-ui

  lemmy:
  #JatNote: I am running on an ARM Ubuntu Virtual Server. Therefore, this is my image. I suggest using matching lemmy/lemmy-ui versions.
    image: dessalines/lemmy:0.17.3-linux-arm64
    hostname: lemmy
    networks:
      - lemmyinternal
    restart: always
    environment:
      - RUST_LOG="warn,lemmy_server=info,lemmy_api=info,lemmy_api_common=info,lemmy_api_crud=info,lemmy_apub=info,lemmy_db_schema=info,lemmy_db_views=info,lemmy_db_views_actor=info,lemmy_db_views_moderator=info,lemmy_routes=info,lemmy_utils=info,lemmy_websocket=info"
    volumes:
      - ./lemmy.hjson:/config/config.hjson
    depends_on:
      - postgres
      - pictrs

  lemmy-ui:
  #JatNote: Again, ARM based image
    image: dessalines/lemmy-ui:0.17.3-linux-arm64
    hostname: lemmy-ui
    networks:
      - lemmyinternal
    environment:
      # this needs to match the hostname defined in the lemmy service
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
      # set the outside hostname here
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=lemmy.bulwarkob.com:1236
      - LEMMY_HTTPS=true
    depends_on:
      - lemmy
    restart: always

  pictrs:
    image: asonix/pictrs
    # this needs to match the pictrs url in lemmy.hjson
    hostname: pictrs
    networks:
      - lemmyinternal
    environment:
      - PICTRS__API_KEY=API_KEY
    user: 991:991
    volumes:
      - ./volumes/pictrs:/mnt
    restart: always

  postgres:
    image: postgres:15-alpine
    # this needs to match the database host in lemmy.hjson
    hostname: postgres
    networks:
      - lemmyinternal
    environment:
      - POSTGRES_USER=AUser
      - POSTGRES_PASSWORD=APassword
      - POSTGRES_DB=lemmy
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data
    restart: always
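
For completeness, a minimal sketch of bringing this stack up and confirming both networks were created (Compose v2 syntax; docker-compose v1 works the same way):

docker compose up -d
docker compose ps
docker network ls | grep lemmy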


lemmy.hjson:

spoiler

{
  # for more info about the config, check out the documentation
  # https://join-lemmy.org/docs/en/administration/configuration.html
  # only few config options are covered in this example config

  setup: {
    # username for the admin user
    admin_username: "AUser"
    # password for the admin user
    admin_password: "APassword"
    # name of the site (can be changed later)
    site_name: "Bulwark of Boredom"
  }

  opentelemetry_url: "http://otel:4317"

  # the domain name of your instance (eg "lemmy.ml")
  hostname: "lemmy.bulwarkob.com"
  # address where lemmy should listen for incoming requests
  bind: "0.0.0.0"
  # port where lemmy should listen for incoming requests
  port: 8536
  # Whether the site is available over TLS. Needs to be true for federation to work.
  # JatNote: I was advised that this is not necessary. It does work without it.
#  tls_enabled: true

  # pictrs host
  pictrs: {
    url: "http://pictrs:8080/"
  # api_key: "API_KEY"
  }

  # settings related to the postgresql database
  database: {
    # name of the postgres database for lemmy
    database: "lemmy"
    # username to connect to postgres
    user: "AUser"
    # password to connect to postgres
    password: "APassword"
    # host where postgres is running
    host: "postgres"
    # port where postgres can be accessed
    port: 5432
    # maximum number of active sql connections
    pool_size: 5
  }
}
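
One gotcha worth calling out: the database block above has to line up with the POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_DB values in the compose file. A quick check from the host, as a sketch (service name per the compose file; the official postgres image trusts local connections inside the container, so this should not prompt for a password):

docker compose exec postgres psql -U AUser -d lemmy -c '\dt'

If that lists the lemmy tables, the credentials and database name are at least consistent on the postgres side.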

The following nginx.conf is for the internal proxy, which is included in the docker-compose.yml. This is entirely separate from Nginx Proxy Manager (NPM).

nginx.conf:

spoiler

worker_processes 1;
events {
    worker_connections 1024;
}
http {
    upstream lemmy {
        # this needs to map to the lemmy (server) docker service hostname
        server "lemmy:8536";
    }
    upstream lemmy-ui {
        # this needs to map to the lemmy-ui docker service hostname
        server "lemmy-ui:1234";
    }

    server {
        # this is the port inside docker, not the public one yet
        listen 80;
        # change if needed, this is facing the public web
        server_name localhost;
        server_tokens off;

        gzip on;
        gzip_types text/css application/javascript image/svg+xml;
        gzip_vary on;

        # Upload limit, relevant for pictrs
        client_max_body_size 20M;

        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";

        # frontend general requests
        location / {
            # distinguish between ui requests and backend
            # don't change lemmy-ui or lemmy here, they refer to the upstream definitions on top
            set $proxpass "http://lemmy-ui";

            if ($http_accept = "application/activity+json") {
              set $proxpass "http://lemmy";
            }
            if ($http_accept = "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"") {
              set $proxpass "http://lemmy";
            }
            if ($request_method = POST) {
              set $proxpass "http://lemmy";
            }
            proxy_pass $proxpass;

            rewrite ^(.+)/+$ $1 permanent;
            # Send actual client IP upstream
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # backend
        location ~ ^/(api|pictrs|feeds|nodeinfo|.well-known) {
            proxy_pass "http://lemmy";
            # proxy common stuff
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";

            # Send actual client IP upstream
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
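
To sanity-check the Accept-header routing in that location / block, a couple of curl probes against the host-mapped port should work; this is only a sketch (the 680 port comes from the compose file above, and the /u/AUser path is just an example for my admin account):

# should land on lemmy-ui and return HTML
curl -i http://localhost:680/

# should be routed to the lemmy backend and return ActivityPub JSON
curl -i -H 'Accept: application/activity+json' http://localhost:680/u/AUser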


The nginx-proxy-manager container only needs to be on the same Docker network as the internal nginx:1-alpine container from the stack.

In NPM, you need to create a proxy host that forwards HTTP port 80 to the IP address of the internal nginx:1-alpine container on the lemmyexternalproxy network in Docker. Include the Websockets Support option.
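
To find that IP, or better, to avoid hardcoding it at all, something like the following from the host should do it (the lemmy_ prefix on the network name and the NPM container name are assumptions based on my project and container naming):

# show which containers sit on the external network and their addresses
docker network inspect lemmy_lemmyexternalproxy

# or attach NPM to that network so you can forward to the proxy container by name instead of by IP
docker network connect lemmy_lemmyexternalproxy nginx-proxy-manager

Once NPM is on the same network, pointing the proxy host at the container name avoids the forward breaking when the container gets a new IP after a restart.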

https://lemmy.bulwarkob.com/pictrs/image/55870601-fb24-4346-8a42-bb14bb90d9e8.png

Then you can use the SSL tab to handle your certificate and so on. As far as I know, NPM is free to work on other networks with other containers as well.

[-] Jattatak@beehaw.org 3 points 1 year ago

I will endeavour to pay it forward by helping others if I can.

[-] Jattatak@beehaw.org 2 points 1 year ago

I got mine all squared away (I hope), if you still need assistance.

[-] Jattatak@beehaw.org 2 points 1 year ago

It would seem the setting was called "Internal" rather than "isolated" in my Portainer. That appears to have been it, though. I can reach other communities now. I'm still seeing a disparity in which posts and comments show up, but I'm hoping that is something that will update in time.
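
In compose terms, that Portainer toggle corresponds to the network's internal flag, i.e. roughly this (matching what I already have commented out in the YML above):

networks:
  lemmyinternal:
    driver: bridge
    # internal: true   # leaving this off is what lets the containers reach the outside world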

[-] Jattatak@beehaw.org 1 points 1 year ago

I see that the instance can be reached and posts are shown; however, comments are not. I have found in the official docs that there is a config snippet for a web-facing reverse proxy: https://join-lemmy.org/docs/en/administration/troubleshooting.html https://github.com/LemmyNet/lemmy-ansible/blob/main/templates/nginx.conf

And this config appears quite different from the "Install with Docker" config instructions: https://join-lemmy.org/docs/en/administration/install_docker.html

spoiler


worker_processes 1;
events {
    worker_connections 1024;
}
http {
    upstream lemmy {
        # this needs to map to the lemmy (server) docker service hostname
        server "lemmy:8536";
    }
    upstream lemmy-ui {
        # this needs to map to the lemmy-ui docker service hostname
        server "lemmy-ui:1234";
    }

    server {
        # this is the port inside docker, not the public one yet
        listen 80;
        # change if needed, this is facing the public web
        server_name localhost;
        server_tokens off;

        gzip on;
        gzip_types text/css application/javascript image/svg+xml;
        gzip_vary on;

        # Upload limit, relevant for pictrs
        client_max_body_size 20M;

        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";

        # frontend general requests
        location / {
            # distinguish between ui requests and backend
            # don't change lemmy-ui or lemmy here, they refer to the upstream definitions on top
            set $proxpass "http://lemmy-ui";

            if ($http_accept = "application/activity+json") {
              set $proxpass "http://lemmy";
            }
            if ($http_accept = "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"") {
              set $proxpass "http://lemmy";
            }
            if ($request_method = POST) {
              set $proxpass "http://lemmy";
            }
            proxy_pass $proxpass;

            rewrite ^(.+)/+$ $1 permanent;
            # Send actual client IP upstream
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # backend
        location ~ ^/(api|pictrs|feeds|nodeinfo|.well-known) {
            proxy_pass "http://lemmy";
            # proxy common stuff
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";

            # Send actual client IP upstream
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}

Do you know if I should expect to have TWO separate NGINX proxy instances (assuming I use NGINX): one in-stack, and one as the web-facing reverse proxy? Or do I need to combine the two configs into one instance?

I am going to see if I can get a Caddy reverse proxy set up in the meantime and see how it performs given your configuration there.
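
If I do go the Caddy route, a minimal Caddyfile sketch would be something like the following, assuming the Caddy container shares a Docker network with the in-stack proxy service so the name resolves (Caddy would also fetch the certificate for the domain itself):

lemmy.bulwarkob.com {
    reverse_proxy proxy:80
}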

[-] Jattatak@beehaw.org 1 points 1 year ago

Hey, if you still feel like helping out :D

I've been through a boatload of changes today since earlier. I've rebuilt using mostly the provided YML from the official guide, and after some tweaking almost everything is working. The internal proxy is now working, and as far as I can tell the containers are fully working amongst themselves.

What I do not know is how to set up a web-facing reverse proxy that works alongside the internal proxy already running (other than the NPM already in place). I turned NPM back on and was able to reach the site, however I cannot reach any other communities from within my instance. I believe the NPM reverse proxy is just not set up right. Error message in lemmy:

spoiler

ERROR HTTP request{http.method=GET http.scheme="https" http.host=lemmy.bulwarkob.com http.target=/api/v3/ws otel.kind="server" request_id=69004ca6-7967-48c3-a4d2-583e961e34d3 http.status_code=101 otel.status_code="OK"}: lemmy_server::api_routes_websocket: couldnt_find_object: Request error: error sending request for url (https://midwest.social/.well-known/webfinger?resource=acct:projectzomboid@midwest.social): operation timed out
   0: lemmy_apub::fetcher::search::search_query_to_object_id
             at crates/apub/src/fetcher/search.rs:17
   1: lemmy_apub::api::resolve_object::perform
           with self=ResolveObject { q: "[!projectzomboid@midwest.social](/c/projectzomboid@midwest.social)", auth: Some(Sensitive) }
             at crates/apub/src/api/resolve_object.rs:21
   2: lemmy_server::root_span_builder::HTTP request
           with http.method=GET http.scheme="https" http.host=lemmy.bulwarkob.com http.target=/api/v3/ws otel.kind="server" request_id=69004ca6-7967-48c3-a4d2-583e961e34d3 http.status_code=101 otel.status_code="OK"
             at src/root_span_builder.rs:16

I would be happy to remove NPM from this stack if it's not too difficult to get a correctly working reverse proxy set up. The documentation doesn't give much to work with on this.
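
For what it's worth, that timeout is an outbound request from the lemmy container, so one way to narrow it down is to run a throwaway curl container on the same internal network and see whether the outside world is reachable at all; this is a sketch, and the lemmy_ network prefix is an assumption from my project name:

docker run --rm --network lemmy_lemmyinternal curlimages/curl -sv 'https://midwest.social/.well-known/webfinger?resource=acct:projectzomboid@midwest.social'

If that times out too, the network itself is blocking outbound traffic rather than NPM.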

[-] Jattatak@beehaw.org 3 points 1 year ago

I actually started with this tutorial a few days ago after failing with the official guide. I followed it but was unable to get it running due to unexpected errors. I'm guessing the tutorial is somewhat out of date. I've made progress since using that guide though, so I will see if I can pull any useful bits out of it later today and continue.

Worst case, I could also just ditch NPM if I can get another NGINX set up in a way you might know how to configure correctly.

[-] Jattatak@beehaw.org 1 points 1 year ago

I appreciate your patience and clear assistance.

conf.d/* has two configurations that appear to be some form of default: default.conf and production.conf. production.conf is only for the admin GUI. default.conf:

The container has a volume set: /lemmy/docker/nginx-proxy-manager/data:/data

I have those folders and more, and they DO seem to have the correct custom item.

Specifically, in the proxy_host folder I have a configuration for the proxy host I set up (1.conf) in the GUI:

spoiler


# ------------------------------------------------------------
# lemmy.bulwarkob.com
# ------------------------------------------------------------


server {
  set $forward_scheme http;
  set $server         "172.24.0.5";
  set $port           1234;

  listen 80;
  listen [::]:80;

  listen 443 ssl http2;
  listen [::]:443 ssl http2;

  server_name lemmy.bulwarkob.com;

  # Let's Encrypt SSL
  include conf.d/include/letsencrypt-acme-challenge.conf;
  include conf.d/include/ssl-ciphers.conf;
  ssl_certificate /etc/letsencrypt/live/npm-1/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/npm-1/privkey.pem;

  # Block Exploits
  include conf.d/include/block-exploits.conf;

  # Force SSL
  include conf.d/include/force-ssl.conf;

  access_log /data/logs/proxy-host-1_access.log proxy;
  error_log /data/logs/proxy-host-1_error.log warn;

  location / {
    # Proxy!
    include conf.d/include/proxy.conf;
  }

  # Custom
  include /data/nginx/custom/server_proxy[.]conf;
}

The rest of the folders are empty.
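
To double-check that nothing else was generated, a quick listing of every conf NPM has written out, as a sketch (container name assumed to be nginx-proxy-manager):

docker exec nginx-proxy-manager find /data/nginx -type f -name '*.conf'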

[-] Jattatak@beehaw.org 1 points 1 year ago

Sure thing. lemmy.hjson:

spoiler

{
  # for more info about the config, check out the documentation
  # https://join-lemmy.org/docs/en/administration/configuration.html
  # only few config options are covered in this example config

  setup: {
    # username for the admin user
    admin_username: "noUsrHere"
    # password for the admin user
    admin_password: "noPassHere"
    # name of the site (can be changed later)
    site_name: "Bulwark of Boredom"
  }

  # the domain name of your instance (eg "lemmy.ml")
  hostname: "lemmy.bulwarkob.com"
  # address where lemmy should listen for incoming requests
  bind: "0.0.0.0"
  # port where lemmy should listen for incoming requests
  port: 8536
  # Whether the site is available over TLS. Needs to be true for federation to work.
  tls_enabled: true

  # pictrs host
  pictrs: {
    url: "http://pictrs:8080/"
    api_key: "API_KEY"
  }

  # settings related to the postgresql database
  database: {
    # name of the postgres database for lemmy
    database: "noDbHere"
    # username to connect to postgres
    user: "noUsrHere"
    # password to connect to postgres
    password: "noPassHere"
    # host where postgres is running
    host: "postgres"
    # port where postgres can be accessed
    port: 5432
    # maximum number of active sql connections
    pool_size: 5
  }
}

I am not certain whether I am somehow looking at the wrong location for the config in the container. There is no volume or link for a conf file from host to container, so I am just grabbing it from the default location, /etc/nginx/nginx.conf:

spoiler

# run nginx in foreground
daemon off;
pid /run/nginx/nginx.pid;
user npm;

# Set number of worker processes automatically based on number of CPU cores.
worker_processes auto;

# Enables the use of JIT for regular expressions to speed-up their processing.
pcre_jit on;

error_log /data/logs/fallback_error.log warn;

# Includes files with directives to load dynamic modules.
include /etc/nginx/modules/*.conf;

events {
	include /data/nginx/custom/events[.]conf;
}

http {
	include                       /etc/nginx/mime.types;
	default_type                  application/octet-stream;
	sendfile                      on;
	server_tokens                 off;
	tcp_nopush                    on;
	tcp_nodelay                   on;
	client_body_temp_path         /tmp/nginx/body 1 2;
	keepalive_timeout             90s;
	proxy_connect_timeout         90s;
	proxy_send_timeout            90s;
	proxy_read_timeout            90s;
	ssl_prefer_server_ciphers     on;
	gzip                          on;
	proxy_ignore_client_abort     off;
	client_max_body_size          2000m;
	server_names_hash_bucket_size 1024;
	proxy_http_version            1.1;
	proxy_set_header              X-Forwarded-Scheme $scheme;
	proxy_set_header              X-Forwarded-For $proxy_add_x_forwarded_for;
	proxy_set_header              Accept-Encoding "";
	proxy_cache                   off;
	proxy_cache_path              /var/lib/nginx/cache/public  levels=1:2 keys_zone=public-cache:30m max_size=192m;
	proxy_cache_path              /var/lib/nginx/cache/private levels=1:2 keys_zone=private-cache:5m max_size=1024m;

	log_format proxy '[$time_local] $upstream_cache_status $upstream_status $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] [Sent-to $server] "$http_user_agent" "$http_referer"';
	log_format standard '[$time_local] $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] "$http_user_agent" "$http_referer"';

	access_log /data/logs/fallback_access.log proxy;

	# Dynamically generated resolvers file
	include /etc/nginx/conf.d/include/resolvers.conf;

	# Default upstream scheme
	map $host $forward_scheme {
		default http;
	}

	# Real IP Determination

	# Local subnets:
	set_real_ip_from 10.0.0.0/8;
	set_real_ip_from 172.16.0.0/12; # Includes Docker subnet
	set_real_ip_from 192.168.0.0/16;
	# NPM generated CDN ip ranges:
	include conf.d/include/ip_ranges.conf;
	# always put the following 2 lines after ip subnets:
	real_ip_header X-Real-IP;
	real_ip_recursive on;

	# Custom
	include /data/nginx/custom/http_top[.]conf;

	# Files generated by NPM
	include /etc/nginx/conf.d/*.conf;
	include /data/nginx/default_host/*.conf;
	include /data/nginx/proxy_host/*.conf;
	include /data/nginx/redirection_host/*.conf;
	include /data/nginx/dead_host/*.conf;
	include /data/nginx/temp/*.conf;

	# Custom
	include /data/nginx/custom/http[.]conf;
}

stream {
	# Files generated by NPM
	include /data/nginx/stream/*.conf;

	# Custom
	include /data/nginx/custom/stream[.]conf;
}

# Custom
include /data/nginx/custom/root[.]conf;
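
Rather than guessing at file locations, one way to see the exact configuration nginx is actually running with is to have it dump the fully resolved config, includes and all; a sketch, with the container name assumed:

docker exec nginx-proxy-manager nginx -T | less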


[-] Jattatak@beehaw.org 1 points 1 year ago* (last edited 1 year ago)

(2/2) I am able to connect to my Lemmy UI site and navigate. When I try to view "Communities", I get stuck with just a spinning wheel and never get results. The same happens when trying to log in or create an account. I have checked the logs for all the containers, and so far the only error is on the lemmy_lemmy_1 container, which is the following (the error is the last entry, labelled DEBUG):

spoiler

2023-06-07T12:08:27.532769Z  INFO lemmy_db_schema::utils: Running Database migrations (This may take a long time)...
2023-06-07T12:08:27.538649Z  INFO lemmy_db_schema::utils: Database migrations complete.
2023-06-07T12:08:27.551889Z  INFO lemmy_server::code_migrations: Running user_updates_2020_04_02
2023-06-07T12:08:27.555253Z  INFO lemmy_server::code_migrations: 0 person rows updated.
2023-06-07T12:08:27.555611Z  INFO lemmy_server::code_migrations: Running community_updates_2020_04_02
2023-06-07T12:08:27.557394Z  INFO lemmy_server::code_migrations: 0 community rows updated.
2023-06-07T12:08:27.557720Z  INFO lemmy_server::code_migrations: Running post_updates_2020_04_03
2023-06-07T12:08:27.559061Z  INFO lemmy_server::code_migrations: 0 post rows updated.
2023-06-07T12:08:27.559241Z  INFO lemmy_server::code_migrations: Running comment_updates_2020_04_03
2023-06-07T12:08:27.562723Z  INFO lemmy_server::code_migrations: 0 comment rows updated.
2023-06-07T12:08:27.562972Z  INFO lemmy_server::code_migrations: Running private_message_updates_2020_05_05
2023-06-07T12:08:27.563976Z  INFO lemmy_server::code_migrations: 0 private message rows updated.
2023-06-07T12:08:27.564197Z  INFO lemmy_server::code_migrations: Running post_thumbnail_url_updates_2020_07_27
2023-06-07T12:08:27.565019Z  INFO lemmy_server::code_migrations: 0 Post thumbnail_url rows updated.
2023-06-07T12:08:27.565194Z  INFO lemmy_server::code_migrations: Running apub_columns_2021_02_02
2023-06-07T12:08:27.566016Z  INFO lemmy_server::code_migrations: Running instance_actor_2021_09_29
2023-06-07T12:08:27.572498Z  INFO lemmy_server::code_migrations: Running regenerate_public_keys_2022_07_05
2023-06-07T12:08:27.573440Z  INFO lemmy_server::code_migrations: Running initialize_local_site_2022_10_10
federation enabled, host is lemmy.bulwarkob.com
Starting http server at 0.0.0.0:8536
2023-06-07T12:08:27.605442Z  INFO lemmy_server::scheduled_tasks: Updating active site and community aggregates ...
2023-06-07T12:08:27.624775Z  INFO lemmy_server::scheduled_tasks: Done.
2023-06-07T12:08:27.624792Z  INFO lemmy_server::scheduled_tasks: Updating banned column if it expires ...
2023-06-07T12:08:27.625406Z  INFO lemmy_server::scheduled_tasks: Reindexing table concurrently post_aggregates ...
2023-06-07T12:08:27.705125Z  INFO lemmy_server::scheduled_tasks: Done.
2023-06-07T12:08:27.705146Z  INFO lemmy_server::scheduled_tasks: Reindexing table concurrently comment_aggregates ...
2023-06-07T12:08:27.728006Z  INFO lemmy_server::scheduled_tasks: Done.
2023-06-07T12:08:27.728027Z  INFO lemmy_server::scheduled_tasks: Reindexing table concurrently community_aggregates ...
2023-06-07T12:08:27.756015Z  INFO lemmy_server::scheduled_tasks: Done.
2023-06-07T12:08:27.756146Z  INFO lemmy_server::scheduled_tasks: Clearing old activities...
2023-06-07T12:08:27.757461Z  INFO lemmy_server::scheduled_tasks: Done.

2023-06-07T12:11:53.210731Z DEBUG HTTP request{http.method=GET http.scheme="http" http.host=lemmy.bulwarkob.com http.target=/api/v3/post/list otel.kind="server" request_id=254f6c39-9146-42e9-9353-06c0e6c1cea4}:perform{self=GetPosts { type_: Some(Local), sort: Some(Active), page: Some(1), limit: Some(40), community_id: None, community_name: None, saved_only: None, auth: None }}: lemmy_db_views::post_view: Post View Query: Query { sql: "SELECT \"post\".\"id\", \"post\".\"name\", \"post\".\"url\", \"post\".\"body\", \"post\".\"creator_id\", \"post\".\"community_id\", \"post\".\"removed\", \"post\".\"locked\", \"post\".\"published\", \"post\".\"updated\", \"post\".\"deleted\", \"post\".\"nsfw\", \"post\".\"embed_title\", \"post\".\"embed_description\", \"post\".\"embed_video_url\", \"post\".\"thumbnail_url\", \"post\".\"ap_id\", \"post\".\"local\", \"post\".\"language_id\", \"post\".\"featured_community\", \"post\".\"featured_local\", \"person\".\"id\", \"person\".\"name\", \"person\".\"display_name\", \"person\".\"avatar\", \"person\".\"banned\", \"person\".\"published\", \"person\".\"updated\", \"person\".\"actor_id\", \"person\".\"bio\", \"person\".\"local\", \"person\".\"banner\", \"person\".\"deleted\", \"person\".\"inbox_url\", \"person\".\"shared_inbox_url\", \"person\".\"matrix_user_id\", \"person\".\"admin\", \"person\".\"bot_account\", \"person\".\"ban_expires\", \"person\".\"instance_id\", \"community\".\"id\", \"community\".\"name\", \"community\".\"title\", \"community\".\"description\", \"community\".\"removed\", \"community\".\"published\", \"community\".\"updated\", \"community\".\"deleted\", \"community\".\"nsfw\", \"community\".\"actor_id\", \"community\".\"local\", \"community\".\"icon\", \"community\".\"banner\", \"community\".\"hidden\", \"community\".\"posting_restricted_to_mods\", \"community\".\"instance_id\", \"community_person_ban\".\"id\", \"community_person_ban\".\"community_id\", \"community_person_ban\".\"person_id\", \"community_person_ban\".\"published\", \"community_person_ban\".\"expires\", \"post_aggregates\".\"id\", \"post_aggregates\".\"post_id\", \"post_aggregates\".\"comments\", \"post_aggregates\".\"score\", \"post_aggregates\".\"upvotes\", \"post_aggregates\".\"downvotes\", \"post_aggregates\".\"published\", \"post_aggregates\".\"newest_comment_time_necro\", \"post_aggregates\".\"newest_comment_time\", \"post_aggregates\".\"featured_community\", \"post_aggregates\".\"featured_local\", \"community_follower\".\"id\", \"community_follower\".\"community_id\", \"community_follower\".\"person_id\", \"community_follower\".\"published\", \"community_follower\".\"pending\", \"post_saved\".\"id\", \"post_saved\".\"post_id\", \"post_saved\".\"person_id\", \"post_saved\".\"published\", \"post_read\".\"id\", \"post_read\".\"post_id\", \"post_read\".\"person_id\", \"post_read\".\"published\", \"person_block\".\"id\", \"person_block\".\"person_id\", \"person_block\".\"target_id\", \"person_block\".\"published\", \"post_like\".\"score\", coalesce((\"post_aggregates\".\"comments\" - \"person_post_aggregates\".\"read_comments\"), \"post_aggregates\".\"comments\") FROM ((((((((((((\"post\" INNER JOIN \"person\" ON (\"post\".\"creator_id\" = \"person\".\"id\")) INNER JOIN \"community\" ON (\"post\".\"community_id\" = \"community\".\"id\")) LEFT OUTER JOIN \"community_person_ban\" ON (((\"post\".\"community_id\" = \"community_person_ban\".\"community_id\") AND (\"community_person_ban\".\"person_id\" = \"post\".\"creator_id\")) AND ((\"community_person_ban\".\"expires\" IS 
NULL) OR (\"community_person_ban\".\"expires\" > CURRENT_TIMESTAMP)))) INNER JOIN \"post_aggregates\" ON (\"post_aggregates\".\"post_id\" = \"post\".\"id\")) LEFT OUTER JOIN \"community_follower\" ON ((\"post\".\"community_id\" = \"community_follower\".\"community_id\") AND (\"community_follower\".\"person_id\" = $1))) LEFT OUTER JOIN \"post_saved\" ON ((\"post\".\"id\" = \"post_saved\".\"post_id\") AND (\"post_saved\".\"person_id\" = $2))) LEFT OUTER JOIN \"post_read\" ON ((\"post\".\"id\" = \"post_read\".\"post_id\") AND (\"post_read\".\"person_id\" = $3))) LEFT OUTER JOIN \"person_block\" ON ((\"post\".\"creator_id\" = \"person_block\".\"target_id\") AND (\"person_block\".\"person_id\" = $4))) LEFT OUTER JOIN \"community_block\" ON ((\"community\".\"id\" = \"community_block\".\"community_id\") AND (\"community_block\".\"person_id\" = $5))) LEFT OUTER JOIN \"post_like\" ON ((\"post\".\"id\" = \"post_like\".\"post_id\") AND (\"post_like\".\"person_id\" = $6))) LEFT OUTER JOIN \"person_post_aggregates\" ON ((\"post\".\"id\" = \"person_post_aggregates\".\"post_id\") AND (\"person_post_aggregates\".\"person_id\" = $7))) LEFT OUTER JOIN \"local_user_language\" ON ((\"post\".\"language_id\" = \"local_user_language\".\"language_id\") AND (\"local_user_language\".\"local_user_id\" = $8))) WHERE ((((((((\"community\".\"local\" = $9) AND ((\"community\".\"hidden\" = $10) OR (\"community_follower\".\"person_id\" = $11))) AND (\"post\".\"nsfw\" = $12)) AND (\"community\".\"nsfw\" = $13)) AND (\"post\".\"removed\" = $14)) AND (\"post\".\"deleted\" = $15)) AND (\"community\".\"removed\" = $16)) AND (\"community\".\"deleted\" = $17)) ORDER BY \"post_aggregates\".\"featured_local\" DESC , hot_rank(\"post_aggregates\".\"score\", \"post_aggregates\".\"newest_comment_time_necro\") DESC , \"post_aggregates\".\"newest_comment_time_necro\" DESC  LIMIT $18 OFFSET $19", binds: [-1, -1, -1, -1, -1, -1, -1, -1, true, false, -1, false, false, false, false, false, false, 40, 0] }
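
Since Lemmy 0.17.x does everything over the websocket at /api/v3/ws, one way to check whether the proxy chain passes the upgrade through is a raw curl handshake against the public hostname; this is only a sketch (the Sec-WebSocket-Key is just the RFC 6455 sample value), and an HTTP 101 response means the upgrade made it through:

curl -i -N \
  -H 'Connection: Upgrade' \
  -H 'Upgrade: websocket' \
  -H 'Sec-WebSocket-Version: 13' \
  -H 'Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==' \
  https://lemmy.bulwarkob.com/api/v3/ws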

[-] Jattatak@beehaw.org 1 points 1 year ago* (last edited 1 year ago)

(1/2) Alright, thanks for helping.

docker-compose.yml

spoiler

version: "3.3"

networks:
  # communication to web and clients
  lemmyexternalproxy:
  # communication between lemmy services
  lemmyinternal:
    driver: bridge
    internal: true

services:
  lemmy:
    image: dessalines/lemmy
    # this hostname is used in nginx reverse proxy and also for lemmy ui to connect to the backend, do not change
    hostname: lemmy
    networks:
      - lemmyinternal
    restart: always
    environment:
      - RUST_LOG="warn,lemmy_server=debug,lemmy_api=debug,lemmy_api_common=debug,lemmy_api_crud=debug,lemmy_apub=debug,lemmy_db_schema=debug,lemmy_db_views=debug,lemmy_db_views_actor=debug,lemmy_db_views_moderator=debug,lemmy_routes=debug,lemmy_utils=debug,lemmy_websocket=debug"
      - RUST_BACKTRACE=full
    volumes:
      - ./lemmy.hjson:/config/config.hjson:Z
    depends_on:
      - postgres
      - pictrs

  lemmy-ui:
    image: dessalines/lemmy-ui
    # use this to build your local lemmy ui image for development
    # run docker compose up --build
    # assuming lemmy-ui is cloned besides lemmy directory
    # build:
    #   context: ../../lemmy-ui
    #   dockerfile: dev.dockerfile
    networks:
      - lemmyinternal
    environment:
      # this needs to match the hostname defined in the lemmy service
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
      # set the outside hostname here
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=lemmy.bulwarkob.com:1236
      - LEMMY_HTTPS=false
      - LEMMY_UI_DEBUG=true
    depends_on:
      - lemmy
    restart: always

  pictrs:
    image: asonix/pictrs:0.4.0-beta.19
    # this needs to match the pictrs url in lemmy.hjson
    hostname: pictrs
    # we can set options to pictrs like this, here we set max. image size and forced format for conversion
    # entrypoint: /sbin/tini -- /usr/local/bin/pict-rs -p /mnt -m 4 --image-format webp
    networks:
      - lemmyinternal
    environment:
      - PICTRS_OPENTELEMETRY_URL=http://otel:4137
      - PICTRS__API_KEY=API_KEY
      - RUST_LOG=debug
      - RUST_BACKTRACE=full
      - PICTRS__MEDIA__VIDEO_CODEC=vp9
      - PICTRS__MEDIA__GIF__MAX_WIDTH=256
      - PICTRS__MEDIA__GIF__MAX_HEIGHT=256
      - PICTRS__MEDIA__GIF__MAX_AREA=65536
      - PICTRS__MEDIA__GIF__MAX_FRAME_COUNT=400
    user: 991:991
    volumes:
      - ./volumes/pictrs:/mnt:Z
    restart: always

  postgres:
    image: postgres:15-alpine
    # this needs to match the database host in lemmy.hjson
    # Tune your settings via
    # https://pgtune.leopard.in.ua/#/
    # You can use this technique to add them here
    # https://stackoverflow.com/a/30850095/1655478
    hostname: postgres
    command:
      [
        "postgres",
        "-c",
        "session_preload_libraries=auto_explain",
        "-c",
        "auto_explain.log_min_duration=5ms",
        "-c",
        "auto_explain.log_analyze=true",
        "-c",
        "track_activity_query_size=1048576",
      ]
    networks:
      - lemmyinternal
      # adding the external facing network to allow direct db access for devs
      - lemmyexternalproxy
    ports:
      # use a different port so it doesnt conflict with potential postgres db running on the host
      - "5433:5432"
    environment:
      - POSTGRES_USER=noUsrHere
      - POSTGRES_PASSWORD=noPassHere
      - POSTGRES_DB=noDbHere
    volumes:
      - ./volumes/postgres:/var/lib/postgresql/data:Z
    restart: always

The NGINX I am using is not the one that came with the stack, but a separate single container running nginx-proxy-manager. I did not customize the conf it installed with, and only used the UI to set up the proxy host and SSL, both of which are working (the front end, at least). The config seems unrelated to this, however I can share it if the rest of the information below is not enough.

[-] Jattatak@beehaw.org 1 points 1 year ago

Thanks for the tip. My VPS currently has several containers with services running that I use for myself and a friend group. I DID try the Ansible route against a blank local VM for testing, though, and I couldn't even get the Ansible installation working on my control node. I am not very well versed in Linux yet; I scrape by for most things, but much still eludes me.

I am trying to stick it out with Docker for now. My NGINX Proxy Manager seems to work fine for proxying my partially broken setup currently, and I don't think I will have any issues once I work out the kinks in the other containers.

12
submitted 1 year ago* (last edited 1 year ago) by Jattatak@beehaw.org to c/lemmy_support@lemmy.ml

Hello! I have been struggling through a few tutorials on getting a Lemmy instance to work correctly when set up with Docker. I have it mostly done, but each time there are various issues that I do not have the knowledge to properly correct. I am familiar with Docker, and already have an Oracle VPS set up on ARM64 Ubuntu. I already have Portainer and an NGINX proxy set up and working okay. I have an existing Lemmy instance "running" but not quite working. My best guess here would be to have someone assist with setting up the docker-compose to work with current updates/settings, as well as the config.hjson.

TIA, and I can't wait to have my own entry into the fediverse working right!
