jascha's blog

Words on privacy, cybersecurity, decentralization, and AI


Follow me on Twitter

CyberSec AI Agents

In the ever-evolving landscape of cybersecurity, organizations are constantly faced with new threats and challenges. As the demand for skilled cybersecurity professionals continues to grow, the talent gap in this critical field has become increasingly apparent. However, the emergence of AI agents promises to revolutionize the way we approach cybersecurity, enabling organizations to enhance their defenses and maximize the productivity of their existing workforce.

AI agents are intelligent, autonomous systems that can learn from their environment and make decisions based on vast amounts of data. In the context of cybersecurity, these agents can be deployed to monitor networks, detect anomalies, and respond to threats in real-time. By leveraging the power of machine learning and artificial intelligence, AI agents can analyze massive volumes of data and identify patterns that human analysts might miss, thereby significantly improving the accuracy and efficiency of threat detection.

One of the most significant advantages of AI agents in cybersecurity is their ability to scale and adapt to the ever-changing threat landscape. Traditional security tools and manual processes often struggle to keep pace with the rapid evolution of cyber threats. In contrast, AI agents can continuously learn and improve their performance, allowing them to stay ahead of the curve and detect even the most sophisticated attacks.

Moreover, AI agents can help organizations overcome the talent gap by empowering their existing cybersecurity professionals to be more productive and effective. By automating routine tasks and providing real-time insights, AI agents can free up valuable time for human experts to focus on more complex and strategic initiatives. This not only improves the overall efficiency of the cybersecurity team but also enables them to make more informed decisions and respond to threats more quickly.

Another key benefit of AI agents in cybersecurity is their ability to enhance collaboration and knowledge sharing within organizations. By serving as a centralized repository of threat intelligence and best practices, AI agents can facilitate the exchange of information and insights among team members, regardless of their location or level of expertise. This helps to break down silos and ensure that everyone has access to the latest threat data and mitigation strategies.

It is important to note, however, that AI agents are not a replacement for human expertise in cybersecurity. While these intelligent systems can greatly enhance an organization's security posture, they still require human oversight and guidance to ensure that they are functioning as intended. Cybersecurity professionals will need to develop new skills and adapt to working alongside AI agents, leveraging their insights and capabilities to make more informed decisions and respond to threats more effectively.

AI agents have the potential to revolutionize the field of cybersecurity by enabling organizations to enhance their defenses, bridge the talent gap, and maximize the productivity of their existing workforce. By leveraging the power of machine learning and artificial intelligence, these intelligent systems can help organizations stay ahead of the threat landscape and protect their critical assets from cyber attacks. As the adoption of AI agents in cybersecurity continues to grow, it is clear that they will play an increasingly critical role in securing our digital future.


KAN!!!!

Artificial intelligence (AI) and large language models (LLMs) have transformed various industries, from healthcare to finance, by enabling machines to understand and generate human-like text. At the heart of many of these models are neural networks, particularly Multi-Layer Perceptrons (MLPs). However, a promising alternative has emerged: Kolmogorov-Arnold Networks (KANs).

What Are Kolmogorov-Arnold Networks (KAN)?

Kolmogorov-Arnold Networks (KANs) are inspired by the Kolmogorov-Arnold representation theorem, a mathematical principle that states any multivariate continuous function can be represented as a composition of continuous functions of a single variable and the operation of addition. Unlike MLPs, which use fixed activation functions on nodes (neurons), KANs employ learnable activation functions on edges (weights), making them fundamentally different in structure and function.
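Concretely, the theorem guarantees that any continuous function of n variables on a bounded domain can be written as a finite composition of univariate functions and addition:

```latex
f(x_1, \dots, x_n) = \sum_{q=1}^{2n+1} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
```

Here the outer functions Φ_q and the inner functions φ_{q,p} are continuous functions of a single variable; KANs relax these fixed functions into learnable splines placed on the network's edges.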

Key Differences Between KANs and MLPs

  1. Learnable Activation Functions: In KANs, activation functions are not fixed but learnable and placed on edges. This flexibility allows KANs to adapt more precisely to the data they are trained on.
  2. No Linear Weights: KANs do not use linear weight matrices. Instead, each weight parameter is replaced by a univariate function parametrized as a spline.
  3. Accuracy and Interpretability: KANs have been shown to outperform MLPs in both accuracy and interpretability. Smaller KANs can achieve comparable or better accuracy than larger MLPs and are easier to visualize and understand.

How KANs Can Transform AI and LLMs

Improved Accuracy and Efficiency

KANs have demonstrated superior performance in various tasks, such as data fitting and partial differential equation (PDE) solving. For instance, the KAN authors report that a smaller KAN can be 100 times more accurate and parameter-efficient than a larger MLP. This efficiency translates into faster neural scaling laws, meaning KANs can achieve high performance with fewer computational resources.

Enhanced Interpretability

One of the significant advantages of KANs is their interpretability. KANs can be visualized intuitively, allowing researchers and developers to understand how the network processes information. This transparency is crucial for scientific discoveries, where understanding the underlying mechanisms of the model is as important as its predictive power. KANs have been shown to assist scientists in rediscovering mathematical and physical laws, acting as collaborative tools in research.

Applications in AI and LLMs

  1. Scientific Research: KANs can be used to model complex scientific phenomena accurately, aiding in the discovery and validation of new theories in physics and mathematics.
  2. Natural Language Processing (NLP): In the realm of LLMs, KANs can enhance the ability of models to understand and generate human-like text by providing more accurate and efficient representations of linguistic patterns.
  3. Healthcare: KANs can improve diagnostic models by providing more accurate predictions and better understanding of medical data, leading to improved patient outcomes.

Future Directions and Broader Impacts

The development and adoption of KANs could lead to significant advancements in AI, making models more efficient, accurate, and interpretable. Future research may focus on further improving the scalability of KANs, exploring their applications in various domains, and integrating them with existing AI frameworks to enhance their capabilities.

Kolmogorov-Arnold Networks represent a promising step forward in the field of AI and LLMs. By leveraging the strengths of both splines and MLPs, KANs offer a powerful alternative that could redefine how we approach complex modeling tasks, leading to more intelligent, efficient, and understandable AI systems.


The following levels of autonomy for AI agents are adapted from the Society of Automotive Engineers (SAE) levels of driving automation.

Agent Evolution

Level 0 (No Autonomy): The AI agent has no autonomous capabilities and requires human input for all tasks.

Level 1 (Task Assistance): The AI agent can assist with specific tasks, such as data analysis or content generation, but requires human guidance and oversight.

Level 2 (Partial Autonomy): The AI agent can perform some tasks autonomously, such as decision-making in limited domains, but requires human intervention for more complex tasks or in cases of uncertainty.

Level 3 (Conditional Autonomy): The AI agent can handle most tasks autonomously within a specific domain, but may request human intervention in edge cases or when faced with situations outside its training scope.

Level 4 (High Autonomy): The AI agent can handle a wide range of tasks autonomously across multiple domains, with minimal human intervention. It can adapt to new situations and learn from its experiences.

Level 5 (Full Autonomy): The AI agent can perform all tasks autonomously, without requiring human intervention. It can continuously learn, improve, and adapt to new situations across all domains.


Thriving in the Era of 100x Large Language Models

Agent Wrangling

Introduction

As we stand on the precipice of a new era in artificial intelligence, with the next generation of large language models (LLMs) poised to be 100 times more powerful than current models, developers who fail to embrace these transformative tools risk being left behind. The deprecation process has already begun for those who hesitate to harness the power of LLMs. In this blog post, we will explore how mastering the art of prompting can help you stay ahead of the curve and thrive in the age of AI-powered knowledge work.

The Prompt: The New Fundamental Unit of Programming

In the world of LLMs, the prompt has emerged as the new fundamental unit of programming. Just as functions and objects have been the building blocks of traditional software development, prompts are now the key to unlocking the potential of AI. By crafting sophisticated prompts and prompt chains, developers can harness the power of LLMs to tackle complex problems and automate knowledge work.

Embracing BAP – Big Ass Prompts

To fully leverage the capabilities of 100x LLMs, developers must think beyond simple, one-off prompts. Enter the concept of BAP – Big Ass Prompts. These comprehensive, carefully designed prompts encapsulate a wealth of domain knowledge and problem-solving strategies. By crafting BAPs, developers can guide LLMs to generate highly targeted and effective solutions to complex challenges.

Mastering the Prompt: The Key to Mastering Knowledge Work

In the era of LLMs, mastering the art of prompting is synonymous with mastering knowledge work. By developing a deep understanding of how to structure prompts, incorporate relevant context, and guide the LLM's output, developers can unlock unprecedented levels of productivity and innovation. Investing time and effort into honing your prompting skills will pay dividends as LLMs continue to evolve and expand their capabilities.

With the advent of larger context windows, spanning up to 1 million tokens, developers can now dump vast amounts of domain-specific knowledge directly into the prompt. This enables LLMs to have access to a wealth of information during the generation process, allowing for more accurate and contextually relevant outputs. By carefully curating and structuring this domain knowledge within the prompt, developers can create highly specialized BAPs that excel at solving specific problems within their field of expertise.

Aggressively Tackling Problems with LLM-Powered Tools and AI Agents

To stay ahead of the curve, developers must actively seek out opportunities to apply LLMs to existing problems. By leveraging LLM-powered tools and AI agents, you can automate repetitive tasks, generate insights from vast amounts of data, and streamline workflows. Embrace an experimental mindset and explore how prompts and prompt chains can be used to create powerful, AI-driven solutions that drive efficiency and innovation.

Conclusion

As we stand on the brink of a new era in artificial intelligence, characterized by 100x more powerful LLMs, developers must adapt and evolve to stay relevant. By mastering the art of prompting, embracing BAPs, and aggressively tackling problems with LLM-powered tools and AI agents, you can position yourself at the forefront of this transformative shift. Remember, the prompt is the new fundamental unit of programming, and those who master it will be the masters of knowledge work in the age of AI.


TV Time!

Say you have a directory of videos you are going to stream using OBS. You may want to sync them to the actual time of day so you can publish a guide of what is playing and what is coming up. You need ffmpeg installed, since the script below uses ffprobe.

Add the videos to a directory. Then run this little bash script to get the total time in seconds:

for f in ./*.webm; do
    ffprobe -v quiet -of csv=p=0 -show_entries format=duration "$f"
done | awk '{sum += $1} END {print sum}'

The above example only gets times for .webm videos. If you have a variety of video file types, you can use the wildcard * to match them all.
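Once you have the total in seconds, you will usually want it as HH:MM:SS to line the playlist up against the clock. A minimal sketch using only shell arithmetic (the 7325 below is just a sample value):

```shell
# Convert a duration in seconds (sample value below) into HH:MM:SS
total=7325
printf '%02d:%02d:%02d\n' "$((total / 3600))" "$(( (total % 3600) / 60 ))" "$((total % 60))"
```

For 7325 seconds this prints 02:02:05.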


Nostr Live Streaming!

NOTE: Assumes using a Debian-based OS.

First, install nginx and the RTMP module:

sudo apt-get install libnginx-mod-rtmp nginx

Edit your /etc/nginx/nginx.conf

Below, change yoursite.org to your site's DNS domain.

Also, change <YOUR-IP-ADDRESS> to the IP address you will be streaming to the server from (i.e., your home IP).

load_module modules/ngx_rtmp_module.so;
worker_processes auto;
#rtmp_auto_push on;
events {
        worker_connections 1024;
        multi_accept on;
}

rtmp {
    server {
        listen 1935;
        listen [::]:1935 ipv6only=on;
        chunk_size 4096;
        allow publish <YOUR-IP-ADDRESS>;
        deny publish all;

        application live {
            live on;
            hls on;
            hls_path /video/hls;
            record off;
        }
    }
}
http {
    server {
        server_name yoursite.org www.yoursite.org;
        listen 80;

        # Static website
        location / {
            root /var/www/html;
            index index.html;
        }

        location /hls {
            types {
                application/vnd.apple.mpegurl m3u8;
            }
            root /video;
            add_header Cache-Control no-cache;

            # To avoid issues with cross-domain HTTP requests (e.g. during development)
            add_header Access-Control-Allow-Origin *;
        }
    }

    # Includes belong at the http level, not inside a location block
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
    server {
        server_name yoursite.org www.yoursite.org;
        listen 443 ssl;
        listen [::]:443 ssl ipv6only=on;

        ssl_certificate /etc/letsencrypt/live/yoursite.org/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/yoursite.org/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

        # Static website
        location / {
            root /var/www/html;
            index index.html;
        }

        location /hls {
            types {
                application/vnd.apple.mpegurl m3u8;
            }
            root /video;
            add_header Cache-Control no-cache;

            # To avoid issues with cross-domain HTTP requests (e.g. during development)
            add_header Access-Control-Allow-Origin *;
        }
    }
}

Make sure port 1935 is reachable from the world, or at least from the <YOUR-IP-ADDRESS> you provided. Also make sure the stream path set in the config (/video above) exists and is writable by nginx.

Be sure also to get a new SSL certificate from LetsEncrypt/Certbot and edit the SSL config portion above.

In OBS, set the stream URL to rtmp://server-IP:1935/live

For the stream key, choose what the .m3u8 file will be called; this example uses stream.

Start streaming something in OBS, then on a mobile device using a carrier connection (so you are outside your own network) open https://yoursite.org/hls/stream.m3u8

If all is working as expected, you should see the test stream! You are now streaming without using Cloudflare or another walled-garden network.
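The playlist URL is just your domain, the /hls path from the config, and the stream key you set in OBS. A tiny sketch that assembles it (yoursite.org and stream are the example values from above):

```shell
# Build the playlist URL from the pieces chosen above; open the
# printed URL in a player or browser to verify the stream.
domain="yoursite.org"   # your DNS domain
key="stream"            # the stream key set in OBS
echo "https://${domain}/hls/${key}.m3u8"
```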


Add a new service file at /etc/systemd/system/strfry-relayable.service

Adjust the options, the user, and the paths as needed.

[Unit]
Description=strfry stream up to relayable

[Service]
User=strfry
ExecStart=/usr/local/bin/strfry stream wss://relayable.org --dir up
Restart=on-failure
RestartSec=5
ProtectHome=yes
NoNewPrivileges=yes
ProtectSystem=full
LimitCORE=524288

[Install]
WantedBy=multi-user.target

Enable it to start automatically on reboot: sudo systemctl enable strfry-relayable.service

Start service: sudo systemctl start strfry-relayable

Check Status: sudo systemctl status strfry-relayable

#V4V #nostr #relayable #relay #strfry


Filter Nginx Logs

Tired of your nginx logs filling up with health checks and other user agents you do not need to log?

This example assumes you use Pingdom, Amazon-Route53, and UptimeRobot health checks. Change or add to fit your needs.

map $http_user_agent $log_ua {
    ~Pingdom         0;
    ~Amazon-Route53  0;
    ~UptimeRobot     0;
    default          1;
}

server {

    ...

    access_log /var/log/nginx/access.log combined if=$log_ua;

}
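The map above filters at log time. If you also want to strip the same health checks out of an existing access log, a rough after-the-fact equivalent is a grep over the same patterns (the two log lines below are made-up samples for illustration):

```shell
# Drop any log line whose user agent matches one of the health checkers;
# only the real-visitor line survives.
printf '%s\n' \
  '1.2.3.4 - - "GET / HTTP/1.1" 200 "-" "Pingdom.com_bot"' \
  '5.6.7.8 - - "GET / HTTP/1.1" 200 "-" "Mozilla/5.0"' |
  grep -vE 'Pingdom|Amazon-Route53|UptimeRobot'
```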

Ventoy ISO USB EDC

One of the things that is always on my key chain is a bootable USB key loaded with Ventoy and multiple ISOs or IMGs. I use it for everything from installing Linux or Windows (if I have to) to disk imaging, pentesting, network troubleshooting, and digital forensics.

Here is a list of must-have ISOs/IMGs to always have in your pocket:

You can also add any Linux distros or other tools you'd like by dragging them into the Ventoy drive.
