jascha's blog

Words on privacy, cybersecurity, decentralization, and AI

In recent years, the open source software community has witnessed a notable shift in licensing strategies. Tech giants and startups alike are increasingly adopting more restrictive open source licenses or even transitioning to closed source models. This trend has sparked debates about the future of open source and the delicate balance between community contribution and commercial interests.

The Rise of Restrictive Open Source Licenses

One of the most talked-about developments in this space is the growing popularity of the GNU Affero General Public License (AGPL). Unlike more permissive licenses like MIT or Apache 2.0, AGPL requires users who modify the software and run it as a service to release their changes publicly.

Several companies have turned to AGPL as a means to encourage contributions while protecting their commercial interests. MongoDB, for instance, switched to the Server Side Public License (SSPL) in 2018, which is essentially a modified version of AGPL. The move was prompted by concerns that cloud providers were offering MongoDB as a service without contributing back to the core product.

The Mixed License Approach

Some companies are opting for a mixed license strategy, maintaining different licenses for different parts of their software ecosystem. This approach allows them to keep core components open while monetizing specific features or services.

Confluent, the company behind Apache Kafka, adopted this strategy in 2018. While keeping the core Kafka project under the Apache 2.0 license, they placed some of their proprietary features under the Confluent Community License, which restricts the software's use as a competing managed service.

From Open to Closed: The Terraform Controversy

Perhaps one of the most controversial moves in recent memory is HashiCorp's decision to change Terraform's license from Mozilla Public License v2.0 to the Business Source License (BSL). This shift, announced in August 2023, effectively transitioned Terraform from open source to source-available software.

HashiCorp cited the need to build a sustainable business model as the primary reason for this change. However, the decision was met with significant backlash from the community, leading to the creation of OpenTF (later renamed OpenTofu), a community-driven fork of Terraform.

Red Hat's Licensing Saga

Red Hat, long considered a champion of open source, has also made headlines with its licensing decisions. In 2023, the company announced changes to how it distributes Red Hat Enterprise Linux (RHEL) source code, limiting access to paying customers and partners.

This move sparked controversy within the open source community and created new hurdles for RHEL-compatible rebuilds like AlmaLinux and Rocky Linux, which had emerged after Red Hat discontinued CentOS Linux. The situation highlights the ongoing tension between maintaining open source principles and protecting commercial interests.

The Impact on the Open Source Ecosystem

These licensing shifts have far-reaching implications for the open source ecosystem:

  1. Community Trust: Changes in licensing can erode trust between companies and their open source communities.
  2. Innovation: More restrictive licenses might slow down innovation by limiting the ability of developers to freely use and modify software.
  3. Forking: As seen with Terraform and RHEL, controversial licensing changes can lead to community-driven forks.
  4. Business Models: Companies are forced to rethink their business models to balance open source ideals with financial sustainability.

Looking Ahead

As the open source landscape continues to evolve, we can expect ongoing debates about the most appropriate licensing models. Companies will need to carefully balance their need for sustainable revenue streams with the benefits of community-driven development and innovation.

The challenge moving forward will be to find licensing models that protect commercial interests without alienating the developer communities that have been instrumental in the success of open source software. As this story unfolds, one thing is clear: the definition of “open source” and what it means for both businesses and developers is undergoing a significant transformation.



LLM Prompting

  1. Zero-shot prompting: This technique involves giving the model a task with no examples or prior information. The model relies entirely on its pre-existing knowledge to generate a response.

    • Example: “Translate the following sentence to French: 'I am learning how to code.'”
  2. One-shot prompting: In this approach, the model is provided with one example of the task before being asked to generate a response. This helps the model understand the desired format and context.

    • Example: “Translate the following sentence to French. Example: 'I love programming.' → 'J'aime programmer.' Now, translate: 'I am learning how to code.'”
  3. Few-shot prompting: This technique involves giving the model several examples of the task to help it understand the context and format better. It is particularly useful for complex tasks (see the API sketch after this list).

    • Example: “Translate the following sentences to French. Example 1: 'I love programming.' → 'J'aime programmer.' Example 2: 'She enjoys reading books.' → 'Elle aime lire des livres.' Now, translate: 'I am learning how to code.'”
  4. Instruction-based prompting: This method uses clear and explicit instructions to guide the model's output. It focuses on detailing the task requirements and expectations.

    • Example: “Please translate the following sentence into French: 'I am learning how to code.' Ensure the translation is accurate and maintains the original meaning.”
  5. Chain-of-thought prompting: This technique encourages the model to think through the problem step-by-step, which can be helpful for complex or multi-step tasks. It guides the model to break down the process into logical steps.

    • Example: “Translate the sentence 'I am learning how to code' into French. First, identify the subject ('I'), then the verb phrase ('am learning'), and finally the object ('how to code'). Now, translate each part and combine them into a coherent sentence.”
  6. Big-Ass-Prompt: Often referred to as “BAP” or “Big Prompt,” this technique improves the performance of large language models by providing extensive and detailed context or examples within the prompt. It leverages the model's ability to generate more accurate responses when given a substantial amount of information to work with.

    • Example: Suppose the task is to generate a summary of a given article. A Big Ass Prompt might look like this:

“Please summarize the following article. First, read the entire article carefully. Identify the main points, key arguments, and important details. Then, write a concise summary that includes the following:

1. The main topic of the article.
2. The primary arguments or points made by the author.
3. Any significant data or statistics mentioned.
4. The conclusion or final thoughts of the author.

Here is the article: [Insert full article text]

Example Summaries:

‘The article discusses the impact of climate change on global agriculture. It highlights the increasing frequency of extreme weather events and their effects on crop yields. Key data from the article includes a 20% reduction in wheat production in certain regions. The author concludes by emphasizing the need for sustainable farming practices to mitigate these impacts.'

‘This article explores the rise of remote work in the tech industry. It outlines the benefits such as increased flexibility and cost savings for companies, as well as challenges like maintaining team cohesion. The article cites a survey where 60% of respondents preferred remote work over traditional office setups. The conclusion stresses the importance of adapting management strategies to support remote teams effectively.'

Now, summarize the provided article in a similar format.”
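
To make this concrete, here is a minimal sketch of sending the few-shot translation prompt from above to an OpenAI-style chat completions API. The endpoint, model name, OPENAI_API_KEY variable, and prompt.json filename are all assumptions; adjust them for whatever provider you use.

# Write the few-shot prompt as a JSON request body (names here are illustrative)
cat > prompt.json <<'EOF'
{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "role": "user",
      "content": "Translate the following sentences to French. Example 1: 'I love programming.' -> 'J'aime programmer.' Example 2: 'She enjoys reading books.' -> 'Elle aime lire des livres.' Now, translate: 'I am learning how to code.'"
    }
  ]
}
EOF

# Send the prompt; requires an API key in the OPENAI_API_KEY env var
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d @prompt.json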



This scale is based on the Society of Automotive Engineers (SAE) levels of driving automation, adapted here to define levels of autonomy for AI agents.

Agent Evolution

  1. Level 0 (No Autonomy): The AI agent has no autonomous capabilities and requires human input for all tasks.

  2. Level 1 (Task Assistance): The AI agent can assist with specific tasks, such as data analysis or content generation, but requires human guidance and oversight.

  3. Level 2 (Partial Autonomy): The AI agent can perform some tasks autonomously, such as decision-making in limited domains, but requires human intervention for more complex tasks or in case of uncertainties.

  4. Level 3 (Conditional Autonomy): The AI agent can handle most tasks autonomously within a specific domain, but may request human intervention in edge cases or when faced with situations outside its training scope.

  5. Level 4 (High Autonomy): The AI agent can handle a wide range of tasks autonomously across multiple domains, with minimal human intervention. It can adapt to new situations and learn from its experiences.

  6. Level 5 (Full Autonomy): The AI agent can perform all tasks autonomously, without requiring human intervention. It can continuously learn, improve, and adapt to new situations across all domains.



TV Time!

Say you have a directory of videos you are going to stream using OBS. You may want to sync them to the actual time to provide a guide of what is playing and what is coming up. You need ffmpeg installed, which provides ffprobe.

Add the videos to a directory. Then run this little bash script to get the total time in seconds:

# Sum the durations (in seconds) of all .webm files in the current directory
for f in ./*.webm
do ffprobe -v quiet -of csv=p=0 -show_entries format=duration "$f"
done | awk '{sum += $1}; END{print sum}'

The above example only gets times for .webm videos. If you have a variety of video file types, you can use the wildcard * to get them all.
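
If you'd rather see the total as HH:MM:SS for building your schedule, a small tweak to the awk step formats it (same pipeline, just a different END block):

# Same pipeline, but print the total as HH:MM:SS
for f in ./*.webm
do ffprobe -v quiet -of csv=p=0 -show_entries format=duration "$f"
done | awk '{sum += $1}; END{printf "%02d:%02d:%02d\n", sum/3600, (sum%3600)/60, sum%60}'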



Nostr Live Streaming!

NOTE: Assumes using a Debian-based OS.

First, install nginx and the RTMP module:

sudo apt-get install libnginx-mod-rtmp nginx

Edit your /etc/nginx/nginx.conf

Below, change yoursite.org to your site's DNS domain.

Also, change <YOUR-IP-ADDRESS> to the IP address you will be streaming to the server from (i.e., your home IP).

load_module modules/ngx_rtmp_module.so;
worker_processes auto;
#rtmp_auto_push on;
events {
        worker_connections 1024;
        multi_accept on;
}

rtmp {
    server {
        listen 1935;
        listen [::]:1935 ipv6only=on;
        chunk_size 4096;
        allow publish <YOUR-IP-ADDRESS>;
        deny publish all;

        application live {
            live on;
            hls on;
            hls_path /video/hls;
            record off;
        }
    }
}
http {
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    server {
        server_name yoursite.org www.yoursite.org;
        listen 80;

        # Static website
        location / {
            root /var/www/html;
            index index.html;
        }

        location /hls {
            types {
                application/vnd.apple.mpegurl m3u8;
            }
            root /video;
            add_header Cache-Control no-cache;

            # To avoid issues with cross-domain HTTP requests (e.g. during development)
            add_header Access-Control-Allow-Origin *;
        }
    }
    server {
        server_name yoursite.org www.yoursite.org;
        listen 443 ssl;
        listen [::]:443 ssl ipv6only=on;

        ssl_certificate /etc/letsencrypt/live/yoursite.org/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/yoursite.org/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

        # Static website
        location / {
            root /var/www/html;
            index index.html;
        }

        location /hls {
            types {
                application/vnd.apple.mpegurl m3u8;
            }
            root /video;
            add_header Cache-Control no-cache;

            # To avoid issues with cross-domain HTTP requests (e.g. during development)
            add_header Access-Control-Allow-Origin *;
        }
    }
}

Make sure port 1935 is reachable from the world, or at least from the <YOUR-IP-ADDRESS> you provided. Also be sure the path for the stream exists; in this config it is /video/hls.
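
For example, to create the HLS path and make it writable by nginx (assuming nginx runs as www-data, the Debian default):

sudo mkdir -p /video/hls
sudo chown -R www-data:www-data /video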

Be sure also to get an SSL certificate from Let's Encrypt/Certbot and edit the SSL paths in the config above.
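
A typical invocation looks something like this (assuming certbot and its nginx plugin are installed):

sudo certbot certonly --nginx -d yoursite.org -d www.yoursite.org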

In OBS, set the stream URL to rtmp://server-IP:1935/live

For the stream key, choose what the .m3u8 will be called, like stream for this example.

Start streaming something in OBS, then open https://yoursite.org/hls/stream.m3u8 on a mobile device using a carrier connection.

If all is working as expected, you should see the test stream! You are now streaming without using Cloudflare or another walled-garden network.



Add a new service file at /etc/systemd/system/strfry-relayable.service

Adjust the options, user, and paths as needed.

[Unit]
Description=strfry stream up to relayable

[Service]
User=strfry
ExecStart=/usr/local/bin/strfry stream wss://relayable.org --dir up
Restart=on-failure
RestartSec=5
ProtectHome=yes
NoNewPrivileges=yes
ProtectSystem=full
LimitCORE=524288

[Install]
WantedBy=multi-user.target

Enable it to start automatically on reboot: sudo systemctl enable strfry-relayable.service

Start service: sudo systemctl start strfry-relayable

Check status: sudo systemctl status strfry-relayable
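
To follow the service's logs while it runs:

sudo journalctl -u strfry-relayable -f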

#V4V #nostr #relayable #relay #strfry



Filter Nginx Logs

Tired of your nginx logs filling up with health checks or other user agents you do not need to log?

This example assumes you use Pingdom, Amazon-Route53, and UptimeRobot health checks. Change or add entries to fit your needs. Note that the map block must live in the http context (e.g. in nginx.conf or a conf.d file).

map $http_user_agent $log_ua {
    ~Pingdom        0;
    ~Amazon-Route53 0;
    ~UptimeRobot    0;

    default         1;
}

server {

    ...

    access_log /var/log/nginx/access.log combined if=$log_ua;
}
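
A quick way to verify the filter (assuming the site answers on localhost): send a request with a matching user agent and confirm no new line lands in the access log:

# Request with a UA matching the ~UptimeRobot pattern; should not be logged
curl -s -A "UptimeRobot/2.0" -o /dev/null http://localhost/
tail -n 1 /var/log/nginx/access.log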


Ventoy ISO USB EDC

One of the things always on my key chain is a bootable USB key with Ventoy and multiple ISOs or IMGs. I use it for everything from installing Linux or Windows (if I have to) to disk imaging, pentesting, network troubleshooting, and digital forensics.

Here is a list of must-have ISOs/IMGs to always have in your pocket:

You can also add any Linux distros or other tools you'd like by dragging them into the Ventoy drive.



picsur

In the nostr community, there are a few image hosting providers that users and apps rely on. If you'd like to host your own images so you have control over them, you can easily do so with Picsur. It is in beta, but it is up to the task of self-hosting images.

Go to GitHub and copy the docker-compose file. You will want to use more secure usernames and passwords, since this will be public-facing.

This assumes you already have Docker and docker-compose installed.

Save the docker-compose.yml and start it to test.

sudo docker-compose up

Go to IP:8080 (e.g. http://localhost:8080) and log in with admin/picsur.

You may want to change your password and enable “Keep original file” under Settings > General.

Once you've tested and ensured everything works, stop it and run it as a daemon. Note that you'll also want to uncomment some of the options to secure the database with stronger users/passwords.

sudo docker-compose up -d

You'll also want to put it behind an SSL-terminating nginx reverse proxy or similar.
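
A minimal sketch of such a proxy (assuming Picsur on localhost:8080 and certificates already issued; images.yoursite.org and the cert paths are placeholders):

server {
    server_name images.yoursite.org;
    listen 443 ssl;

    ssl_certificate /etc/letsencrypt/live/images.yoursite.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/images.yoursite.org/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Allow larger image uploads than the nginx default
        client_max_body_size 32m;
    }
}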

V4V ⚡



strfry hearts tor

This assumes you already have nginx installed for your clearnet relay reverse proxy and have strfry running on port 7777.

Install Tor and Socat:

sudo apt update
sudo apt install tor socat

Add the needed Tor config:

sudo nano /etc/tor/torrc

Config to add:

HiddenServiceDir /var/lib/tor/hidden_service/
HiddenServicePort 80 127.0.0.1:80

Now restart Tor service:

sudo systemctl restart tor

Grab the .onion address (alternatively, you can generate a custom one with mkp224o):

cat /var/lib/tor/hidden_service/hostname 

Save this address for future steps.

Now add config for socat:

mkdir -p /etc/http-to-socks-proxy/
nano /etc/http-to-socks-proxy/strfrysocat.conf

Add this config, replacing the .onion hostname with yours:

PROXY_HOST=127.0.0.1
PROXY_PORT=9050
LOCAL_PORT=9081
REMOTE_HOST=yourtorhiddenserviceurl.onion
REMOTE_PORT=80
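
The steps below assume a systemd template unit at /etc/systemd/system/http-to-socks-proxy@.service that reads these variables. If you don't already have one, a minimal sketch (based on the common socat http-to-socks-proxy pattern; verify it against your own setup) looks like this:

[Unit]
Description=HTTP to SOCKS proxy for %i
After=network.target tor.service

[Service]
# Each instance reads its settings from /etc/http-to-socks-proxy/<instance>.conf
EnvironmentFile=/etc/http-to-socks-proxy/%i.conf
ExecStart=/usr/bin/socat tcp4-listen:${LOCAL_PORT},reuseaddr,fork,bind=127.0.0.1 socks4a:${PROXY_HOST}:${REMOTE_HOST}:${REMOTE_PORT},socksport=${PROXY_PORT}
Restart=on-failure

[Install]
WantedBy=multi-user.target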

Enable strfry socat service:

ln -s /etc/systemd/system/http-to-socks-proxy\@.service /etc/systemd/system/multi-user.target.wants/http-to-socks-proxy\@strfrysocat.service

Start service:

sudo systemctl start http-to-socks-proxy@strfrysocat

Check status:

sudo systemctl status http-to-socks-proxy@strfrysocat

Verify the tunnel is active:

netstat -tulpn | grep socat

Add nginx config for the .onion site. This is an example of my config, where the website and relay websockets are on the same top-level domain. Edit it to fit yours:

sudo nano /etc/nginx/sites-available/strfry-tor.conf

Add port 80 config for .onion site:

server {
    listen 80;
    server_name yourtorhiddenserviceurl.onion; # replace with the tor hostname

    location / {
        error_page 418 = @websocket;

        if ($http_accept = "application/nostr+json") {
            return 418;
        }
        try_files /nonexistent @$http_upgrade;
    }

    location @websocket {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:7777;
        proxy_http_version 1.1;
        send_timeout 300s;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location @ {
        root /var/www/html;
        index index.html;
    }
}

Enable the conf and restart nginx:

cd /etc/nginx/sites-enabled
sudo ln -s ../sites-available/strfry-tor.conf .

sudo systemctl restart nginx

Now use Tor Browser, or run the Tails ISO as a VM with gossip installed, to test that everything is working correctly.

