jascha's blog

Words on privacy, cybersec, decentralization, and nostr

TV Time!

Say you have a directory of videos you are going to stream using OBS. You may want to sync them to the actual time to provide a guide of what is playing and what will be coming up. You need to have ffmpeg installed to use ffprobe.

Add the videos to a directory. Then run this little bash script to get the total time in seconds:

for f in ./*.webm
do ffprobe -v quiet -of csv=p=0 -show_entries format=duration "$f"
done | awk '{sum += $1}; END{print sum}'

The above example only gets times for .webm videos. If you have a variety of video file types, you can use the wildcard * to get them all.
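If you want the total as a wall-clock duration rather than raw seconds, a quick follow-up works (the 13542 here is a made-up example of the script's output):

```shell
# Convert a total in seconds (example value) into HH:MM:SS for the guide.
total=13542
printf '%02d:%02d:%02d\n' $((total/3600)) $((total%3600/60)) $((total%60))
# prints 03:45:42
```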


Follow me on nostr Follow me on Mastodon Join Us Citizenry.Technology

Nostr Live Streaming!

NOTE: This assumes a Debian-based OS.

First, install nginx and the RTMP module:

sudo apt-get install libnginx-mod-rtmp nginx

Edit your /etc/nginx/nginx.conf

Below, change yoursite.org to your site's DNS domain.

Also, change <YOUR-IP-ADDRESS> to the IP you will be streaming to the server from (i.e. your home IP).

load_module modules/ngx_rtmp_module.so;
worker_processes auto;
#rtmp_auto_push on;
events {
        worker_connections 1024;
        multi_accept on;
}

rtmp {
    server {
        listen 1935;
        listen [::]:1935 ipv6only=on;
        chunk_size 4096;
        allow publish <YOUR-IP-ADDRESS>;
        deny publish all;

        application live {
            live on;
            hls on;
            hls_path /video/hls;
            record off;
        }
    }
}
http {
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    server {
        server_name yoursite.org www.yoursite.org;
        listen 80;

        # Static website
        location / {
            root /var/www/html;
            index index.html;
        }

        location /hls {
            types {
                application/vnd.apple.mpegurl m3u8;
            }
            root /video;
            add_header Cache-Control no-cache;

            # To avoid issues with cross-domain HTTP requests (e.g. during development)
            add_header Access-Control-Allow-Origin *;
        }
    }
    server {
        server_name yoursite.org www.yoursite.org;
        listen 443 ssl;
        listen [::]:443 ipv6only=on;

        ssl_certificate /etc/letsencrypt/live/yoursite.org/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/yoursite.org/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

        # Static website
        location / {
            root /var/www/html;
            index index.html;
        }

        location /hls {
            types {
                application/vnd.apple.mpegurl m3u8;
            }
            root /video;
            add_header Cache-Control no-cache;

            # To avoid issues with cross-domain HTTP requests (e.g. during development)
            add_header Access-Control-Allow-Origin *;
        }
    }
}

Make sure port 1935 is accessible to the world, or at least to the <YOUR-IP-ADDRESS> you provided. Also be sure the path for the stream is valid; in the config above it is /video.
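If the HLS directory does not exist yet, create it and hand it to the nginx user (assuming the Debian default of www-data):

```shell
# Create the HLS output path from the config above and make it
# writable by nginx (www-data is the Debian default user).
sudo mkdir -p /video/hls
sudo chown -R www-data:www-data /video
```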

Be sure also to get a new SSL certificate from LetsEncrypt/Certbot and edit the SSL config portion above.

In OBS, set the stream URL to rtmp://server-IP:1935/live

For the stream key, choose what the .m3u8 will be called, like stream for this example.

Start streaming something in OBS, and on a mobile device using a carrier connection open https://yoursite.org/hls/stream.m3u8

If all is working as expected, you should see the test stream! You are now streaming without using Cloudflare or another walled-garden network.



Stream strfry to Relayable

Add a new service file at /etc/systemd/system/strfry-relayable.service

Add options, change user and paths as needed.

[Unit]
Description=strfry stream up to relayable

[Service]
User=strfry
ExecStart=/usr/local/bin/strfry stream wss://relayable.org --dir up
Restart=on-failure
RestartSec=5
ProtectHome=yes
NoNewPrivileges=yes
ProtectSystem=full
LimitCORE=524288

[Install]
WantedBy=multi-user.target

Enable it to start automatically on reboot: sudo systemctl enable strfry-relayable.service

Start service: sudo systemctl start strfry-relayable

Check Status: sudo systemctl status strfry-relayable

#V4V #nostr #relayable #relay #strfry



Filter Nginx Logs

Tired of your nginx logs filling up with health checks and other user agents you do not need to log?

This example assumes you use Pingdom, Amazon-Route53, and UptimeRobot health checks. Change or add to fit your needs.

map $http_user_agent $log_ua {
    ~Pingdom 0;
    ~Amazon-Route53 0;
    ~UptimeRobot 0;

    default 1;
}

server {

    ...

    access_log /var/log/nginx/access.log combined if=$log_ua;

}
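After reloading nginx you can sanity-check that the filtered agents stop showing up; `grep -c` counts matching lines, and the agent names are the same ones mapped above:

```shell
# Count recent hits from one of the filtered health checkers;
# once the map takes effect this count should stop growing.
sudo tail -n 200 /var/log/nginx/access.log | grep -c "UptimeRobot"
```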


Ventoy ISO USB EDC

One of the things that is always on my keychain is a bootable USB key with Ventoy and multiple ISOs or IMGs. I use it for everything from installing Linux or Windows (if I have to) to disk imaging, pentesting, network troubleshooting, and digital forensics.

Here is a list of must-have ISOs/IMGs to always have in your pocket:

You can also add any Linux distros or other tools you'd like by dragging them into the Ventoy drive.



picsur

In the nostr community, there are a few image hosting providers that users and apps use. If you'd like to host your own images so you have control over them, you can easily do so with Picsur. It is in beta, but it is up to the task of self-hosting images.

Go to GitHub and copy the docker-compose file. You will want to use more secure usernames and passwords, since you will be using this for public-facing access.

This assumes you already have Docker and docker-compose installed.

Save the docker-compose.yml and start it to test.

sudo docker-compose up

Go to IP:8080 and log in with admin/picsur. (e.g. http://localhost:8080)

You may want to change your password and enable “Keep original file” under Settings > General.

Once you have tested and ensured everything works, stop it and run it as a daemon. Note that you'll also want to uncomment some of the options to secure the database with a stronger user/password.

sudo docker-compose up -d

You'll also want to put it behind an nginx reverse proxy (or similar) with SSL.
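As a sketch of that, a minimal nginx reverse proxy could look like the following (pics.yourdomain.com is a placeholder; add SSL afterwards with certbot as in the other posts):

```nginx
server {
    server_name pics.yourdomain.com;
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Allow larger image uploads through the proxy
        client_max_body_size 100m;
    }
}
```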

V4V ⚡



strfry hearts tor

This assumes you already have nginx installed for your clearnet relay reverse proxy and have strfry running on port 7777.

Install Tor and Socat:

sudo apt update
sudo apt install tor socat

Add Tor config needed:

sudo nano /etc/tor/torrc

Config to add:

HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:80

Now restart Tor service:

sudo systemctl restart tor

Grab the .onion address (alternatively, you can generate a custom one with mkp224o):

cat /var/lib/tor/hidden_service/hostname 

Save this address for future steps.

Now add config for socat:

sudo mkdir -p /etc/http-to-socks-proxy/
sudo nano /etc/http-to-socks-proxy/strfrysocat.conf

Add this config, replacing the .onion address with yours:

PROXY_HOST=127.0.0.1
PROXY_PORT=9050
LOCAL_PORT=9081
REMOTE_HOST=yourtorhiddenserviceurl.onion
REMOTE_PORT=80
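The http-to-socks-proxy@.service template unit used in the next step is not shipped with socat. If you don't already have one, a minimal version (an assumption on my part, reading the PROXY_/LOCAL_/REMOTE_ variables from the conf file above) can be saved as /etc/systemd/system/http-to-socks-proxy@.service:

```ini
[Unit]
Description=HTTP to SOCKS proxy tunnel for %i
After=network.target tor.service

[Service]
# %i is the conf name, e.g. strfrysocat
EnvironmentFile=/etc/http-to-socks-proxy/%i.conf
ExecStart=/usr/bin/socat tcp4-listen:${LOCAL_PORT},reuseaddr,fork,bind=127.0.0.1 socks4a:${PROXY_HOST}:${REMOTE_HOST}:${REMOTE_PORT},socksport=${PROXY_PORT}
Restart=on-failure

[Install]
WantedBy=multi-user.target
```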

Enable the strfry socat service so it starts on boot:

sudo systemctl enable http-to-socks-proxy@strfrysocat.service

Start service:

sudo systemctl start http-to-socks-proxy@strfrysocat

Check status:

sudo systemctl status http-to-socks-proxy@strfrysocat

Verify tunnel is active:

netstat -tulpn | grep socat

Add an nginx config for the .onion site. This example is from my config, where the website and the relay websockets are on the same top-level domain. Edit it to fit yours:

sudo nano /etc/nginx/sites-available/strfry-tor.conf

Add port 80 config for .onion site:

server {
    listen 80;
    server_name yourtorhiddenserviceurl.onion; # replace with the tor hostname

    location / {
        error_page 418 = @websocket;

        if ($http_accept = "application/nostr+json") {
            return 418;
        }
        try_files /nonexistent @$http_upgrade;
    }

    location @websocket {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:7777;
        proxy_http_version 1.1;
        send_timeout 300s;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location @ {
        root /var/www/html;
        index index.html;
    }
}

Enable the conf and restart nginx:

sudo ln -s /etc/nginx/sites-available/strfry-tor.conf /etc/nginx/sites-enabled/

sudo systemctl restart nginx

Now use Tor Browser, or run the Tails ISO as a VM with Gossip installed, to test that everything is working correctly.



Relays Behind Cloudflare

Cloudflare is a controversial company to some, considering that they have taken steps to block sites and organizations that promote specific ideas or topics. While I am not supporting any of those organizations, there is a deeper issue with using Cloudflare in front of public relays. For people running personal relays, it does have the advantage of obfuscating your home IP address. But this post is focused on people providing general public relays.

One of nostr's selling points is its decentralized (distributed?) architecture of clients and relays. Relays help avoid the centralization issues witnessed with Twitter, Facebook, and other social media companies. Why build this whole decentralized architecture only to put it behind a centralized company's network?

Some of the biggest relays using Cloudflare for DNS/proxying:

– nostr.wine
– relay.damus.io
– relay.snort.social
– X.nostr.land

The above is not an exhaustive list, but you can check your relays by going to a site like digwebinterface.com, choosing Type: NS, and checking Authoritative.

Example: https://www.digwebinterface.com/?hostnames=relay.damus.io&type=NS&useresolver=8.8.4.4&ns=auth&nameservers=

You can also choose Type: A and do an IPWhois.

Relayable.org will never use Cloudflare. However, we do use cloud providers like Amazon Web Services (AWS). But the overall architecture allows us to quickly spin up Docker containers for a new relay and load a copy of the DB in an automated fashion using Ansible and Terraform. Then it is a matter of repointing DNS to the new relay. Creating new relays can be done in a couple of minutes, which makes the particular cloud or VPS a relay is running on less of an issue. Finally, we back up the DB offsite on safe harbors of encrypted storage.

I'm not discouraging people from using the above relays. However, putting relays behind Cloudflare is not a feasible approach to a censorship-resistant network. It is a good idea to make sure you have a good mix of relays, with some (but not all) using Cloudflare.

I'd be glad to help any relay admins move off Cloudflare. We will add much more documentation on Relayable.org to be completely transparent in our configs, architecture, and operations.



strfry and NIP-11 on One Domain

When setting up a #nostr relay like strfry, you may want to use a single top-level domain for both your relay and the website about the relay. To be NIP-11 compliant, you need to allow requests with the header accept: application/nostr+json to hit strfry. The nginx config below allows for this.

server {
    server_name tld-relay.com;

    location / {
        error_page 418 = @websocket;

        # Check for the NIP-11 accept header to send to strfry
        if ($http_accept = "application/nostr+json") {
            return 418;
        }

        # Dummy entry to send requests to the relay or the static site
        try_files /nonexistent @$http_upgrade;
    }

    # WSS proxy for the strfry relay
    location @websocket {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:7777;
        proxy_http_version 1.1;
        proxy_read_timeout 300s;
        proxy_connect_timeout 300s;
        proxy_send_timeout 300s;
        send_timeout 300s;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # Static relay website
    location @ {
        root /var/www/html;
        index index.html;
    }
}


Run your own nostr relay!

Overview

In order to have a resilient, decentralized nostr network, there needs to be a good distribution of relays, avoiding the caveat of too many large (centralized) relays, many of unknown architecture and availability. It is not too difficult to run your own private relay at home or on an inexpensive cloud provider.

The following is based on the unofficial strfry docker repo: https://hub.docker.com/r/relayable/strfry

Install Requirements

(In this example we're assuming your host is running Ubuntu 22.04, but this should work with most Debian-based OSs.)

On your cloud server or home server be sure to install Docker.

Now install docker-compose:

sudo curl -SL https://github.com/docker/compose/releases/download/v2.16.0/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose

Make executable:

sudo chmod +x /usr/local/bin/docker-compose

Point DNS to IP of Relay

In your DNS registrar or hosting control panel, add an A record for relay.yourdomain.com pointing to your instance's public IP. You can alternatively use dynamic DNS if hosting from home. If hosting from home, I recommend using a free Cloudflare account to proxy DNS to your home IP to obfuscate it. There are many videos and how-tos on this.

Create docker-compose.yml

Make a docker-compose.yml file with the contents below:

services:
  strfry-nostr-relay:
    image: relayable/strfry:latest
    restart: unless-stopped
    volumes:
      - /local/path/to/strfry-data/etc:/etc/
      - /local/path/to/strfry-data/strfry-db:/app/strfry-db
      - /local/path/to/strfry-data/plugins:/app/plugins
    ports:
      - "7777:7777"
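The /local/path/to entries above are bind mounts on the host; create them before starting the container (adjust the base path to your actual location):

```shell
# Create the host directories mapped into the container above.
mkdir -p /local/path/to/strfry-data/{etc,strfry-db,plugins}
```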

Add whitelist.js Plugin to Lock Down Relay

Add the following as whitelist.js in your plugins directory:

#!/usr/bin/env node

const whiteList = {
    '003ba9b2c5bd8afeed41a4ce362a8b7fc3ab59c25b6a1359cae9093f296dac01': true,
};

const rl = require('readline').createInterface({
  input: process.stdin,
  output: process.stdout,
  terminal: false
});

rl.on('line', (line) => {
    let req = JSON.parse(line);

    if (req.type === 'lookback') { 
        return; // do nothing
    } 

    if (req.type !== 'new') {
        console.error("unexpected request type"); // will appear in strfry logs
        return;
    }

    let res = { id: req.event.id }; // must echo the event's id

    if (whiteList[req.event.pubkey]) {
        res.action = 'accept';
    } else {
        res.action = 'reject';
        res.msg = 'blocked: not on white-list';
    }

    console.log(JSON.stringify(res));
});

Change the hex public key (003ba9b2c5bd8afeed41a4ce362a8b7fc3ab59c25b6a1359cae9093f296dac01) to yours and add others you want to allow to use the relay.

Make it executable:

sudo chmod +x whitelist.js
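You can smoke-test the plugin locally before wiring it into strfry (assuming node is installed): it reads one JSON request per line on stdin and answers with an accept/reject decision.

```shell
# Feed the plugin a fake "new" event from a non-whitelisted pubkey;
# it should answer with a reject action.
echo '{"type":"new","event":{"id":"abc","pubkey":"deadbeef"}}' | ./whitelist.js
# prints {"id":"abc","action":"reject","msg":"blocked: not on white-list"}
```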

Create your strfry.conf in the /etc directory mapped in the docker-compose.yml above:

##
## Default strfry config for relayable/strfry Docker
##

# Directory that contains the strfry LMDB database (restart required)
db = "./strfry-db/"

dbParams {
    # Maximum number of threads/processes that can simultaneously have LMDB transactions open (restart required)
    maxreaders = 256

    # Size of mmap() to use when loading LMDB (default is 10TB, does *not* correspond to disk-space used) (restart required)
    mapsize = 10995116277760
}

relay {
    # Interface to listen on. Use 0.0.0.0 to listen on all interfaces (restart required)
    bind = "0.0.0.0"

    # Port to open for the nostr websocket protocol (restart required)
    port = 7777

    # Set OS-limit on maximum number of open files/sockets (if 0, don't attempt to set) (restart required)
    nofiles = 1000000

    # HTTP header that contains the client's real IP, before reverse proxying (ie x-real-ip) (MUST be all lower-case)
    realIpHeader = ""

    info {
        # NIP-11: Name of this server. Short/descriptive (< 30 characters)
        name = "strfry docker test"

        # NIP-11: Detailed information about relay, free-form
        description = "This is a strfry instance."

        # NIP-11: Administrative nostr pubkey, for contact purposes
        pubkey = "unset"

        # NIP-11: Alternative administrative contact (email, website, etc)
        contact = "unset"
    }

    # Maximum accepted incoming websocket frame size (should be larger than max event and yesstr msg) (restart required)
    maxWebsocketPayloadSize = 131072

    # Websocket-level PING message frequency (should be less than any reverse proxy idle timeouts) (restart required)
    autoPingSeconds = 55

    # If TCP keep-alive should be enabled (detect dropped connections to upstream reverse proxy)
    enableTcpKeepalive = false

    # How much uninterrupted CPU time a REQ query should get during its DB scan
    queryTimesliceBudgetMicroseconds = 10000

    # Maximum records that can be returned per filter
    maxFilterLimit = 500

    # Maximum number of subscriptions (concurrent REQs) a connection can have open at any time
    maxSubsPerConnection = 20

    writePolicy {
        # If non-empty, path to an executable script that implements the writePolicy plugin logic
        plugin = "./plugins/whitelist.js"

        # Number of seconds to search backwards for lookback events when starting the writePolicy plugin (0 for no lookback)
        lookbackSeconds = 0
    }

    compression {
        # Use permessage-deflate compression if supported by client. Reduces bandwidth, but slight increase in CPU (restart required)
        enabled = true

        # Maintain a sliding window buffer for each connection. Improves compression, but uses more memory (restart required)
        slidingWindow = true
    }

    logging {
        # Dump all incoming messages
        dumpInAll = false

        # Dump all incoming EVENT messages
        dumpInEvents = false

        # Dump all incoming REQ/CLOSE messages
        dumpInReqs = false

        # Log performance metrics for initial REQ database scans
        dbScanPerf = false
    }

    numThreads {
        # Ingester threads: route incoming requests, validate events/sigs (restart required)
        ingester = 3

        # reqWorker threads: Handle initial DB scan for events (restart required)
        reqWorker = 3

        # reqMonitor threads: Handle filtering of new events (restart required)
        reqMonitor = 3

        # yesstr threads: Experimental yesstr protocol (restart required)
        yesstr = 1
    }
}

events {
    # Maximum size of normalised JSON, in bytes
    maxEventSize = 65536

    # Events newer than this will be rejected
    rejectEventsNewerThanSeconds = 900

    # Events older than this will be rejected
    rejectEventsOlderThanSeconds = 94608000

    # Ephemeral events older than this will be rejected
    rejectEphemeralEventsOlderThanSeconds = 60

    # Ephemeral events will be deleted from the DB when older than this
    ephemeralEventsLifetimeSeconds = 300

    # Maximum number of tags allowed
    maxNumTags = 2000

    # Maximum size for tag values, in bytes
    maxTagValSize = 1024
}

Under the info section, you can add a name, description, pubkey, and contact to fit your relay.

If using the nginx config below, you can make the relay more secure by changing the ports mapping in docker-compose.yml to "127.0.0.1:7777:7777", so it is only reachable through the reverse proxy. Change any other settings you feel confident you need to alter.

From the directory with the docker-compose.yml file, start your container with docker-compose to test that it works: sudo docker-compose up

This will start the container in your terminal. Once you are happy the configuration is working, you can start it as a daemon: sudo docker-compose up -d

Add Nginx Reverse Proxy and SSL

Install nginx on your relay: sudo apt-get update && sudo apt-get install nginx certbot python3-certbot-nginx

Remove default config: sudo rm -rf /etc/nginx/sites-available/default

Create new default config: sudo nano /etc/nginx/sites-available/default Add new reverse proxy config:

    server{
        server_name relay.yourdomain.com;
        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_pass http://127.0.0.1:7777;
            proxy_http_version 1.1;
            proxy_read_timeout 300s;
            proxy_connect_timeout 300s;
            proxy_send_timeout 300s;
            send_timeout 300s;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }

Change relay.yourdomain.com to your DNS name.

Restart nginx: sudo systemctl restart nginx

Add LetsEncrypt SSL Certificate

Use certbot to create new SSL and install it with nginx-plugin (replace with your DNS name): sudo certbot --nginx -d relay.yourdomain.com

Restart nginx again: sudo systemctl restart nginx

If there are no errors, you are good to go!

Testing and Usage

You can now install something like nostril to test your relay (you will also need websocat to send the event below). Just use a testing nostr account that you've added to whitelist.js, or add the relay to your client.

nostril --envelope --sec <your sec hex key> --content "docker container is working and whitelisting!" | websocat ws://localhost:7777

Using Other strfry Commands

See container name or ID: sudo docker ps

Enter container to get bash access: sudo docker exec -it <container> /bin/bash

The prompt will show you have entered the running container. You can now run any strfry commands needed. See the strfry readme for more.

bash-5.1# ./strfry --help

Congrats, you now have a working strfry nostr relay!

By npub1y3uh89v5a4vq92t8q0j6su94zhvcdxpywjn3l6hpsr5welarqtrqj7yzhd  @jascha.

V4V lightning:crispactor61@walletofsatoshi.com

