new article or whatever

This commit is contained in:
2005 2024-09-03 16:40:51 +02:00
parent e18668bc1e
commit ef62dc2c55
36 changed files with 429 additions and 1090 deletions

@@ -165,8 +165,8 @@ h6::before { color: var(--maincolor); content: '###### '; }
}
.toc {
background-color: #ececec;
color: #232333;
background-color: #191830;
color: #FFF;
padding: 10px;
padding-bottom: 0;
border-radius: 5px;
@@ -178,7 +178,6 @@ footer {
display: flex;
align-items: center;
border-top: 0.4rem dotted var(--bordercl);
padding: 2rem 0rem;
margin-top: 2rem;
}
.soc {

@@ -1,7 +1,6 @@
---
title: "About me"
date: 2024-04-28
edited: 2024-07-15
date: 2024-08-15
draft: false
---
@@ -9,15 +8,14 @@ draft: false
Hi there. I see you somehow stumbled across my site.
My name is 4o1x5, or Máté. I am a "CS" student from Hungary. I mostly specialize in backend development, but I also do frontend on rare occasions. I am a privacy and libre/open-source advocate, and an avid Linux ~~abuser~~ user with a passion for [NixOS](https://nixos.org).
I know a lot about Rust, JavaScript, and some protocols like HTTP and I2P. I have my own hardware that I run my own projects and instances on (_homelab_).
I try to spend my time on anything that benefits me over time and builds character: I read books, learn through reinforcement, and build knowledge from real-life examples. I speak two languages, Hungarian and English. I was also learning French a year ago, but that got left behind.
With no career yet, I have the freedom of trying out many things before I go out into the real world to work. I have experimented with a lot of areas in technology: UI/UX development, algorithms, data science, machine learning, and so on. I have come to the realization that _fullstack development_ is what really interests me, as it includes many technologies that I already know and want to learn. The earlier-mentioned topics still interest me regardless, just not to that degree. Creating is one of my passions.
I'm fond of many music genres, and I might be the best example of that guy that _listens to anything_. Ranging from [uptempo hardcore](https://soundcloud.com/xn88ax/dsordr-btcrushd-iii-l9), [experimental bass](https://soundcloud.com/onetruegod/one-true-god-heaven), [classical/dubstep](https://www.youtube.com/watch?v=5BzgNBn786o), [noise](https://fine-sir-1584660650.bandcamp.com/track/real-music) [electronic](https://www.youtube.com/watch?v=NLi2v-Gq-5A), [breakcore](https://www.youtube.com/watch?v=btefjNXeaYg), [rock](https://youtu.be/DZyYapMZSec?si=qR5b56Y97YSoFtft&t=240), [industrial metal](https://www.youtube.com/watch?v=z0wK6s-6cbo), [electronic rock](https://youtu.be/yVsr9U50f8c?si=XuQscjSd74dOWhqV&t=48) [phonk](https://soundcloud.com/prodberto/mid-day-midnight-remix), brazilian phonk, [r&b](https://www.youtube.com/watch?v=u9n7Cw-4_HQ), [hip hop](https://www.youtube.com/watch?v=tnVAEAo7nvA) [electro house](https://soundcloud.com/geoxor/dead), [french drill](https://www.youtube.com/watch?v=cojoYPRcIJA), [uk drill](https://www.youtube.com/watch?v=-qO2ED-l1xw), [jazz rap](https://www.youtube.com/watch?v=J87pJrxvJ5E), [electronic](https://youtu.be/KTOgfHb8dZk?si=cExGLo-OhMV-d6Le&t=41), [hungarian](https://www.youtube.com/watch?v=WMaW8y3-af8), [pop](https://youtu.be/ceLyMb0MGLE?si=2R3mlg7qIXhflOax&t=269), [hardstyle](https://www.youtube.com/watch?v=ReI1IKl554k), [pop-rap](https://www.youtube.com/watch?v=m4_9TFeMfJE), [synthwave](https://www.youtube.com/watch?v=uVtgQX4Y11s), [darksynth](https://youtu.be/oe1wA1hAdd0?si=CJQ6OuGdboL45pFO&t=71) (absolute banger to this day).
^^ Click the links to hear my favorite in each genre.
I most likely have a favorite in every genre. It's one of my superpowers, as I have always found a way to connect with people through music.
## Wanna chat?
# Open for discussion
I am open for any type of conversation; if you feel like we could get along, send me a message and I'll most likely respond.
[**Add me on matrix**](https://matrix.to/#/@4o1x5:4o1x5.dev)

@@ -51,8 +51,6 @@ Here is a video of how it looks like in it's _fullscreen_ mode:
### Finamp _mobile_
Finamp is an open-source jellyfin music client for Android.
It's one of the most feature rich clients out there, supporting many features also found in the mobile Spotify client.
I don't really like its design, as it doesn't quite look like Spotify, but that's probably because their focus isn't on that. Regardless, it's clean.
| Song focus | List of songs |
| :------------------: | :-------------------: |
| ![](finamp_song.png) | ![](finamp_songs.png) |

@@ -1,229 +0,0 @@
---
title: Who needs Spotify?
description: Jellyfin, the self-hosted Spotify alternative.
date: 2024-07-17 07:00:00+0000
image: feishin.png
categories:
- Nix
- Piracy
tags:
- Jellyfin
- Music
- Homelab
- Selfhost
draft: true
writingTime: "1h 40m"
---
## Intro
At the beginning of 2024 I cancelled my last subscription, Spotify. Even though I listen to an immoderate amount of music 90 percent of the day, switching wasn't hard at all. Instead of streaming I download music (which is reminiscent of the 2000s but surprisingly still popular), sync it between my phone and my computer, and listen to it that way. Or at least that's what I did for 6-7 months, after which I went back to streaming — but not in the way you'd think. Instead of Spotify, Deezer, or Tidal I use Jellyfin and listen from my own server over the internet.
## Downloading, what?
Yes, surprising, isn't it? An incredible number of people still ~~steal~~ borrow their music. There is even a whole peer-to-peer platform for it called [Soulseek](https://en.wikipedia.org/wiki/Soulseek). That's where I get most of my music too.
My main reason is privacy. I have yet to find a platform that respects my privacy and allows me to buy albums or songs with Bitcoin or Monero, so my last resort is to pirate. I could buy the original CDs, but most artists I listen to don't have one, and I loathe ripping.
Everyone has their own opinion about piracy, and I think in my case it's 100% justified.
## Jellyfin
_Jellyfin is a Free Software Media System that puts you in control of managing and streaming your media. It is an alternative to the proprietary Emby and Plex, to provide media from a dedicated server to end-user devices via multiple apps._
Jellyfin is a really great media server: it has many clients and supports almost any kind of media, from visual to audible.
I have been using it for about two years now, but I only recently realized I could stream my music from there instead of syncing it to all my devices. For the past few days it has been a pleasant experience, and I have no complaints.
### Feishin _desktop_
Feishin is a rewrite of Sonixd, a client for Subsonic.
It's a really clean client for Jellyfin; I found it in nixpkgs and I use it daily.
It has almost all the features I need and a modern, Spotify-like UI.
Sadly it's missing a download feature, meaning I cannot download music and then listen to it on the road. Streaming FLACs is really inefficient if you are on mobile data or some public Wi-Fi. I travel a lot, so this is kind of a deal breaker, but I can just use my phone.
Here is a video of how it looks in its _fullscreen_ mode:
<video src="./feishin.mp4" width="100%" height="100%" controls></video>
![](fieshin_front_page.png)
### Finamp _mobile_
Finamp is an open-source jellyfin music client for Android.
It's one of the most feature rich clients out there, supporting many features also found in the mobile Spotify client.
I don't really like its design, as it doesn't quite look like Spotify, but that's probably because their focus isn't on that. Regardless, it's clean.
| Song focus | List of songs |
| :------------------: | :-------------------: |
| ![](finamp_song.png) | ![](finamp_songs.png) |
## Self-hosting
### Define a domain for the nix server
```nix
networking.domain = "example.com";
```
### Jellyfin
Nixpkgs has Jellyfin options, so we can deploy it that way. It's really straightforward and seamless.
This example also includes an nginx configuration, since I'm assuming you want to access it from remote locations and maybe share it with friends.
```nix
{ pkgs, config, ... }: {
services.jellyfin = {
enable = true;
# Setting the home directory
# might need to create with mkdir /home/jellyfin
dataDir = "/home/jellyfin";
};
services.nginx = {
virtualHosts = {
"jelly.${config.networking.domain}" = {
forceSSL = true;
enableACME = true;
locations."/" = {
extraConfig = ''
proxy_pass http://localhost:8096;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_set_header X-Forwarded-Host $http_host;
'';
};
# Enable socket for features like syncPlay and live view in the admin panel
locations."/socket" = {
extraConfig = ''
proxy_pass http://localhost:8096;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_set_header X-Forwarded-Host $http_host;
'';
};
};
};
};
}
```
After this you can head to `jelly.yourdomain.com` and run the setup.
Keep in mind this is a really basic setup; all data will be stored at `/home/jellyfin`, including the SQLite database.
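If you would rather not keep everything under one directory, the module exposes separate directory options; a minimal sketch, assuming the `cacheDir`/`logDir` options of the nixpkgs Jellyfin module are available in your nixpkgs revision (the paths are examples):

```nix
# A sketch, not the setup used above: split Jellyfin's state across directories.
{ ... }: {
  services.jellyfin = {
    enable = true;
    dataDir = "/home/jellyfin";        # database and metadata
    cacheDir = "/var/cache/jellyfin";  # transcodes, image cache
    logDir = "/var/log/jellyfin";
  };
}
```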
### slskd
slskd is a fully featured, modern client-server application for Soulseek. You can log in to its web panel and download files from the network.
I had some problems with permissions when I was trying to use the native package from _nixpkgs_, so instead I just run it as an OCI container, which uses Linux namespaces to isolate the application.
**Create directories**
```bash
mkdir /home/jellyfin/Music
mkdir /home/jellyfin/Music/unsorted
```
**Create the service**
Don't forget to change the credentials: your Soulseek username and password, and also the slskd login username and password.
```nix
{ pkgs, config, ... }: {
# Define the slskd container
virtualisation.oci-containers.containers = {
slskd = {
image = "slskd/slskd";
ports = [
"5030:5030" # panel
"50300:50300" # soulseek
];
volumes = [
# you can use picard or similar applications to sort them, that's why I link it to unsorted
"/home/jellyfin/Music/unsorted:/downloads"
"/home/jellyfin/Music:/music"
];
environment = {
SLSKD_SHARED_DIR = "/music";
SLSKD_DOWNLOADS_DIR = "/downloads";
# these will be used to login to slskd at `soulseek.example.com`
SLSKD_USERNAME = "slskd username";
SLSKD_PASSWORD = "slskd password";
# This is your soulseek login, if you don't have an account don't worry, just type in anything here and it will create an account.
SLSKD_SLSK_USERNAME = "soulseek login name";
        SLSKD_SLSK_PASSWORD = "soulseek login password";
SLSKD_SLSK_LISTEN_IP_ADDRESS = "0.0.0.0";
};
};
};
# Open the ports for slskd so users can download from you
networking.firewall = {
enable = true;
allowedTCPPorts = [
50300
];
allowedUDPPorts = [
50300
];
};
services.nginx = {
virtualHosts = {
"soulseek.${config.networking.domain}" = {
forceSSL = true;
enableACME = true;
locations."/" = {
extraConfig = ''
proxy_pass http://localhost:5030;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
proxy_request_buffering off;
'';
};
};
};
};
}
```
Don't forget to open up the ports on your router so that users can download from you!
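If you keep your Nix configuration in a public repository, you may not want the passwords above in it. The oci-containers module can read them from a file on the host instead; a sketch, assuming the module's `environmentFiles` option and a file you create yourself (the path is an example):

```nix
# A sketch: load slskd credentials from a host file instead of the Nix store.
# /run/secrets/slskd.env is an example path; it should contain lines like
#   SLSKD_SLSK_USERNAME=...
#   SLSKD_SLSK_PASSWORD=...
{ ... }: {
  virtualisation.oci-containers.containers.slskd = {
    environmentFiles = [ "/run/secrets/slskd.env" ];
  };
}
```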
## Usage
### Create a music library for jellyfin
Head to the Jellyfin domain you defined and log in with the credentials you set.
In the top right, click the profile icon and head to `Dashboard` -> `Libraries` -> `Add Media Library`, then add a new music library like in the picture.
![](jellyfin_music_library.png)
### Download music
Head to the Soulseek domain you defined and log into `slskd`.
slskd is really easy to use: upon entering you are greeted with the search bar; type in anything and hit enter. slskd will search other peers on the network for the string and return matching files.
![](slskd_search.png)
![](slskd_search_result.png)
After you hit download, slskd will put the files in `/home/jellyfin/Music/unsorted`.
Jellyfin will automatically scan your library every 3 hours, but you can override this by clicking `Scan All Libraries` in the `Dashboard`.
### Connect a client
Download a client (I recommend the ones I mentioned) and connect to your server.
For the server URL type in `jelly.yourdomain.com`.
### Enjoy
<video src="./Replay_2024-07-15_20-43-44.mp4" width="100%" height="100%" controls></video>

@@ -1,392 +0,0 @@
---
title: Monitor instances
description: Export metrics, collect them, visualize them.
date: 2024-05-17 07:00:00+0000
image: chris-yang-1tnS_BVy9Jk-unsplash.jpg
categories:
- Nix
- Guide
- Sysadmin
- Monitoring
tags:
- Nix
- Nginx
- Prometheus
- Exporters
- Monitoring
- Docker compose
draft: false
writingTime: "20m"
---
# Monitoring
Monitoring your instances allows you to keep track of your servers' load and health over time. Even looking at the stats once a day can make a huge difference, as it allows you to prevent catastrophic disasters before they happen.
I have been monitoring my servers with this method for years, and there have been many cases where I was grateful for setting it all up.
In this small article I have included two guides to set these services up. The first uses [NixOS](#nixos), and I also explain it with [docker-compose](#docker-compose), though that part is brief, as the main focus of this article is NixOS.
![Made with Excalidraw](graph1.png)
**Prometheus**
Prometheus is an open-source monitoring system. It helps track, collect, and analyze metrics from various applications and infrastructure components. It collects metrics from other pieces of software called _exporters_, which serve an HTTP endpoint that returns data in the Prometheus text format.
Here is an example from `node-exporter`:
```text
# curl http://localhost:9100/metrics
# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 2.54196405e+06
node_cpu_seconds_total{cpu="0",mode="iowait"} 4213.44
node_cpu_seconds_total{cpu="0",mode="irq"} 0
node_cpu_seconds_total{cpu="0",mode="nice"} 0.06
node_cpu_seconds_total{cpu="0",mode="softirq"} 743.4
...
```
**Grafana**
Grafana is an open-source data visualization and monitoring platform. It has hundreds of features embedded that can help you query from data sources like Prometheus, InfluxDB, MySQL and so on...
## NixOS
Nix makes it trivial to set up these services, as there are already predefined options for them in nixpkgs. I will give you example configuration files below that you can copy and paste.
I have a guide on [remote deployment](/p/remote-deployments-on-nixos/) for NixOS; below you can see an example of a folder structure you can use to deploy the services.
{{< filetree/container >}}
{{< filetree/folder name="server1" state="closed" >}}
{{< filetree/folder name="services" state="closed" >}}
{{< filetree/file name="some-service.nix" >}}
{{< filetree/folder name="monitoring" state="closed" >}}
{{< filetree/file name="prometheus.nix" >}}
{{< filetree/file name="grafana.nix" >}}
{{< filetree/folder name="exporters" state="closed" >}}
{{< filetree/file name="node.nix" >}}
{{< filetree/file name="smartctl.nix" >}}
{{< /filetree/folder >}}
{{< /filetree/folder >}}
{{< /filetree/folder >}}
{{< filetree/file name="configuration.nix" >}}
{{< filetree/file name="flake.nix" >}}
{{< filetree/file name="flake.lock" >}}
{{< /filetree/folder >}}
{{< /filetree/container >}}
### Exporters
First up is node-exporter. It exports all kinds of system metrics, ranging from CPU usage and load average to the number of systemd services.
#### Node-exporter
```nix
# /services/monitoring/exporters/node.nix
{ pkgs, ... }: {
services.prometheus.exporters.node = {
enable = true;
#port = 9001; #default is 9100
enabledCollectors = [ "systemd" ];
};
}
```
#### Smartctl
Smartctl is a tool included in the smartmontools package, a collection of monitoring tools for hard drives, SSDs, and filesystems.
This exporter enables you to check up on the health of your drive(s). It will also send you a wall notification if one of your drives has bad sectors, which usually suggests it's dying.
```nix
# /services/monitoring/exporters/smartctl.nix
{ pkgs, ... }: {
# exporter
services.prometheus.exporters.smartctl = {
enable = true;
devices = [ "/dev/sda" ];
};
# for wall notifications
services.smartd = {
enable = true;
notifications.wall.enable = true;
devices = [
{
device = "/dev/sda";
}
];
};
}
```
If you happen to have other drives, you can use `lsblk` to check their paths:
```bash
nix-shell -p util-linux --command lsblk
```
For example, here are my PC's drives:
```
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 1 0B 0 disk
nvme1n1 259:0 0 476,9G 0 disk
├─nvme1n1p1 259:1 0 512M 0 part /boot
├─nvme1n1p2 259:2 0 467,6G 0 part
│ └─luks-bbb8e429-bee1-4b5e-8ce8-c54f5f4f29a2
│ 254:0 0 467,6G 0 crypt /nix/store
│ /
└─nvme1n1p3 259:3 0 8,8G 0 part
└─luks-f7e86dde-55a5-4306-a7c2-cf2d93c9ee0b
254:1 0 8,8G 0 crypt [SWAP]
nvme0n1 259:4 0 931,5G 0 disk /mnt/data
```
### Prometheus
Now that we have set up these two exporters, we need to collect their metrics.
Here is a config file for Prometheus with the scrape configs already written:
```nix
# /services/monitoring/prometheus.nix
{pkgs, config, ... }:{
services.prometheus = {
enable = true;
scrapeConfigs = [
{
job_name = "node";
scrape_interval = "5s";
static_configs = [
{
targets = [ "localhost:${toString config.services.prometheus.exporters.node.port}" ];
labels = { alias = "node.server1.local"; };
}
];
}
{
job_name = "smartctl";
scrape_interval = "5s";
static_configs = [
{
targets = [ "localhost:${toString config.services.prometheus.exporters.smartctl.port}" ];
labels = { alias = "smartctl.server1.local"; };
}
];
}
];
};
}
```
I recommend raising the 5s interval to a bigger number if you have little storage, as you can imagine it can generate a lot of data:
~16 kB average per scrape (node-exporter). A day has 86400 seconds; divide that by 5 and that's 17280 scrapes a day.
17280 \* 16 = 276480 kB. That's about 270 megabytes a day. And if you have multiple servers, multiply accordingly.
30 days of scraping is about 8 gigabytes (for one server). **And keep in mind Prometheus's default retention is 15 days!**
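If storage is tight, you can also cap retention declaratively; a minimal sketch using the NixOS module's `retentionTime` option (the value is an example):

```nix
# /services/monitoring/prometheus.nix (addition) — keep only 7 days of data
{ ... }: {
  services.prometheus.retentionTime = "7d";
}
```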
### Grafana
Now let's get ourselves a sexy dashboard like this one. First we have to set up Grafana.
![Node exporter full (id 1860)](20240518_1958.png)
```nix
# /services/monitoring/grafana.nix
{ pkgs, config, ... }:
let
grafanaPort = 3000;
in
{
services.grafana = {
enable = true;
settings.server = {
http_port = grafanaPort;
http_addr = "0.0.0.0";
};
provision = {
enable = true;
datasources.settings.datasources = [
{
name = "prometheus";
type = "prometheus";
url = "http://127.0.0.1:${toString config.services.prometheus.port}";
isDefault = true;
}
];
};
};
networking.firewall = {
allowedTCPPorts = [ grafanaPort ];
allowedUDPPorts = [ grafanaPort ];
};
}
```
If you want to access it via the internet, change the following:
- `http_addr = "127.0.0.1"`
- remove the firewall allowed ports
This ensures data will only flow through the nginx reverse proxy.
Remember to set `networking.domain = "example.com"` to your domain.
```nix
# /services/nginx.nix
{ pkgs, config, ... }:
let
url = "http://127.0.0.1:${toString config.services.grafana.settings.server.http_port}";
in {
services.nginx = {
enable = true;
virtualHosts = {
"grafana.${config.networking.domain}" = {
# Auto cert by let's encrypt
forceSSL = true;
enableACME = true;
locations."/" = {
proxyPass = url;
extraConfig = "proxy_set_header Host $host;";
};
locations."/api" = {
extraConfig = ''
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $host;
'';
proxyPass = url;
};
};
};
};
# enable 80 and 443 ports for nginx
networking.firewall = {
enable = true;
allowedTCPPorts = [
443
80
];
allowedUDPPorts = [
443
80
];
};
}
```
### Log in
The default user is `admin` and the password is `admin`. Grafana will ask you to change it upon logging in!
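You can also set the admin credentials declaratively instead of relying on the interactive prompt; a sketch, assuming the `settings.security` options of the NixOS Grafana module (the secret path is an example):

```nix
# A sketch: declarative admin credentials for Grafana.
# $__file{...} makes Grafana read the secret from a file at runtime,
# so the password does not end up in the world-readable Nix store.
{ ... }: {
  services.grafana.settings.security = {
    admin_user = "admin";
    admin_password = "$__file{/run/secrets/grafana-admin-password}";
  };
}
```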
### Add the dashboards
For node-exporter you can go to Dashboards --> New --> Import --> paste in `1860`.
Now you can see all the metrics of all your server(s).
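Instead of importing by hand, the NixOS module can also provision dashboards from JSON files on disk; a sketch, assuming the module's `provision.dashboards` options and a local `./dashboards` directory containing exported dashboard JSON files:

```nix
# A sketch: load every dashboard JSON found in ./dashboards at startup.
{ ... }: {
  services.grafana.provision.dashboards.settings.providers = [
    {
      name = "static dashboards";
      options.path = ./dashboards; # directory of exported dashboard .json files
    }
  ];
}
```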
## Docker-compose
{{< filetree/container >}}
{{< filetree/folder name="monitoring-project" state="closed" >}}
{{< filetree/file name="docker-compose.yml" >}}
{{< filetree/file name="prometheus.yml" >}}
{{< /filetree/folder >}}
{{< /filetree/container >}}
### Compose project
I did not include a reverse proxy or smartctl, as I forgot how to actually do it; that's how long I've been using Nix :/
```yaml
# docker-compose.yml
version: "3.8"
networks:
monitoring:
driver: bridge
volumes:
prometheus_data: {}
services:
node-exporter:
image: prom/node-exporter:latest
container_name: node-exporter
restart: unless-stopped
hostname: node-exporter
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
command:
- "--path.procfs=/host/proc"
- "--path.rootfs=/rootfs"
- "--path.sysfs=/host/sys"
- "--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)"
networks:
- monitoring
prometheus:
image: prom/prometheus:latest
container_name: prometheus
restart: unless-stopped
hostname: prometheus
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml
- prometheus_data:/prometheus
command:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus"
- "--web.console.libraries=/etc/prometheus/console_libraries"
- "--web.console.templates=/etc/prometheus/consoles"
- "--web.enable-lifecycle"
networks:
- monitoring
grafana:
image: grafana/grafana:latest
container_name: grafana
networks:
- monitoring
restart: unless-stopped
ports:
- '3000:3000'
```
```yaml
# ./prometheus.yml
global:
scrape_interval: 5s
scrape_configs:
- job_name: "node"
static_configs:
- targets: ["node-exporter:9100"]
```
```bash
docker compose up -d
```
### Setup prometheus as data source inside grafana
Head to Connections --> Data sources --> Add new data source --> Prometheus
Type in `http://prometheus:9090` as the URL, then at the bottom click `Save & test`.
Now you can add the dashboards, [explained in this section](#add-the-dashboards).
Photo by <a href="https://unsplash.com/@chrisyangchrisfilm?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Chris Yang</a> on <a href="https://unsplash.com/photos/silhouette-photography-of-man-1tnS_BVy9Jk?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a>

@@ -1,248 +0,0 @@
---
#title: Getting started with remote deployment on NixOs
title: Remote deployments on NixOs
description: A quick and dirty guide to get started with building systems to remote instances
date: 2024-05-04 00:00:00+0000
image: taylor-vick-M5tzZtFCOfs-unsplash.jpg
categories:
- Nix
- Guide
- Sysadmin
tags:
- Nix
- NixOs
- Server Management
draft: true
---
With the capabilities of Nix & NixOS, we can tailor-make services on our local computer,
build the system, and then transmit it to a remote server using the `--target-host` argument of the `nixos-rebuild`
command. This is an efficient method of deploying services to remote servers, because you
don't have to SSH into the machine and set up the files there before building.
## How remote deployments work
![](diagram1.svg)
NixOS allows for a seamless build process: you can build the system on your local computer and then use
SSH to transfer the configuration to the remote machine where the services are deployed.
This process is easy to manage, streamline, and learn!
## Getting started
First you will need a machine running NixOS. Duh. You will also need root access to it in some way.
If you have the root user and its password, you are set to go and can skip to the section where [I explain how to get ready for remote deployment](#first-boot).
### Setting up NixOS on a server
#### Installing
I assume that if you are reading this article, you know how to install an operating system.
You will need to flash a USB drive with NixOS. For the sake of an easy install I will use `latest-nixos-plasma5-x86_64-linux.iso`, since it comes with the fairly easy-to-use Calamares installer.
![Installer](nixosinstaller.png)
You can go ahead and click through it and it will install.
## First boot
For remote deployment to work you will need to enable SSH and configure some parameters.
We are enabling root login and also SFTP for file transfer.
```nix
# configuration.nix
services.openssh = {
enable = true;
allowSFTP = true;
settings = {
PasswordAuthentication = true;
PermitRootLogin = "yes";
};
};
networking.hostName = "server1";
networking.domain = "localhost";
```
**Rebuild the system**
```zsh
sudo nixos-rebuild switch
```
## Setting up key auth
To facilitate easy deployments, you can transfer your public SSH key to the remote machine, allowing you to log
in without having to enter the password for each rebuild. This method is both more convenient and safer.
```bash
ssh-copy-id root@server
```
## Create a directory for the remote
On your local machine, it's important to organize your files into a directory structure, particularly when working with
multiple servers. Below I showcase my personal file organization method.
{{< filetree/container >}}
{{< filetree/folder name="servers" >}}
{{< filetree/folder name="server1" state="closed" >}}
{{< filetree/file name="configuration.nix" >}}
{{< filetree/file name="flake.nix" >}}
{{< filetree/file name="flake.lock" >}}
{{< /filetree/folder >}}
{{< filetree/folder name="server2" state="closed" >}}
{{< filetree/file name="configuration.nix" >}}
{{< filetree/folder name="services" state="closed" >}}
{{< filetree/file name="matrix.nix" >}}
{{< filetree/file name="webserver.nix" >}}
{{< /filetree/folder >}}
{{< filetree/file name="flake.nix" >}}
{{< filetree/file name="flake.lock" >}}
{{< /filetree/folder >}}
{{< /filetree/folder >}}
{{< /filetree/container >}}
## Copying essential files from remote
NixOS does not support partial builds, so you will need to transfer all the necessary files from `/etc/nixos` to
your local machine. This includes files such as `hardware-configuration.nix` and `configuration.nix`.
```bash
scp root@server:/etc/nixos/configuration.nix configuration.nix
```
```bash
scp root@server:/etc/nixos/hardware-configuration.nix hardware-configuration.nix
```
## Create flakes
You can create a file called `flake.nix` or use `nix` to do so.
```bash
nix flake init
```
Paste the following in the file.
```nix
# flake.nix
{
description = "Server1 deployments";
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.11";
};
outputs =
{ self
, nixpkgs
, ...
}:
let
system = "x86_64-linux";
in
{
nixosConfigurations.server1 = nixpkgs.lib.nixosSystem {
inherit system;
modules = [
./configuration.nix
];
};
};
}
```
Here is an example of what your `configuration.nix` should look like.
```nix
# configuration.nix
{ config, pkgs, ... }:
{
imports =
[
./hardware-configuration.nix
];
networking.hostName = "server1";
networking.domain = "example.com";
services.openssh = {
enable = true;
allowSFTP = true;
settings = {
PasswordAuthentication = true;
PermitRootLogin = "yes";
};
};
}
```
## Set up a service
This little example shows a dummy nginx service.
```nix
# nginx.nix
{ ... }: {
  services.nginx = {
    enable = true;
    virtualHosts = {
      "example.com" = {
        locations."/" = {
          # ...
        };
      };
    };
  };
}
```
Insert it in the configuration.nix `imports` section.
```nix
imports = [
./hardware-configuration.nix
./nginx.nix
];
```
## Deploy!
After all this, we can just go ahead and execute a rebuild like we would on our local machine, except in this case we have to add `--flake` with the hostname we are building for, plus `--target-host`.
```bash
nixos-rebuild switch --flake .#server1 --target-host root@server --show-trace
```
## Service management
Since NixOS uses systemd, we can use its tools such as `journalctl` or `systemctl` to check up on how our services are doing.
Here are a few commands I recommend:
Print the last 200 log lines of nginx:
```bash
journalctl -u nginx.service -n 200
```
Displays the status of the service
```bash
systemctl status nginx.service
```
## Afterthoughts
The first-boot section could be skipped if you [create a custom NixOS installation medium](https://wiki.nixos.org/wiki/Creating_a_NixOS_live_CD) and flash that to the server. With a custom medium you can have SSH enabled with these options out of the box and also add your public key.
This is how I've been doing my deployments for the past month on my 4 servers. It's much easier than my old-school method of SSHing into my Alpine machines and managing deployments with `docker-compose`.
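The custom installation media idea can be sketched as a small module layered on top of the stock minimal installer ISO; this is an untested sketch, and the SSH key is a placeholder:

```nix
# iso.nix — a sketch: extend the minimal installer ISO with SSH access.
{ modulesPath, ... }: {
  imports = [ "${modulesPath}/installer/cd-dvd/installation-cd-minimal.nix" ];
  services.openssh = {
    enable = true;
    settings.PermitRootLogin = "yes";
  };
  # placeholder key — replace with your own public key
  users.users.root.openssh.authorizedKeys.keys = [
    "ssh-ed25519 AAAA...replace-me user@laptop"
  ];
}
```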

@@ -1,83 +0,0 @@
---
title: Setting up a SSG on NixOs with nginx
description: Using hugo we can compile a project into a static site which can be later served with Nginx
date: 2024-04-24 00:00:00+0000
image: daniele-levis-pelusi-YKsqkazGaOw-unsplash.jpg
categories:
- Nix
- Guide
- Sysadmin
tags:
- Nix
- Nginx
- Short
- Hugo
draft: true
---
## Overview
After conducting research and finding insufficient guidance online on how to properly set up a Hugo site
with nginx, I decided to create my own guide. I will provide you with step-by-step instructions on how to
compile your Hugo project into a static site, which can then be served through nginx's `root` option.
## Defining the derivation to build the site
```nix
# /website/default.nix
{ pkgs }:
pkgs.stdenv.mkDerivation rec {
name = "website";
version = "0.1.0";
src = /home/user/website;
buildInputs = with pkgs; [ hugo ];
dontConfigure = true;
buildPhase = ''
cp -r $src/* .
${pkgs.hugo}/bin/hugo
'';
installPhase = ''
mkdir -p $out
cp -r public/* $out/
'';
}
```
## Setting up Nginx
```nix
# /services/nginx.nix
{ pkgs, ... }: {
services.nginx = {
enable = true;
virtualHosts = {
"example.com" = {
forceSSL = true;
enableACME = true;
locations."/" = {
# Relative path to current file
root = pkgs.callPackage ../website/default.nix { };
};
# setting error page as hugo error page
extraConfig = ''
error_page 404 /404.html;
'';
};
};
};
}
```
## Rebuild the system
After importing this module to your `flake.nix`/`configuration.nix` you can rebuild the system and see that the site is up!
```bash
sudo nixos-rebuild switch
```
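One caveat: the derivation above uses an absolute path (`/home/user/website`) as `src`, which ties the build to a single machine. A sketch of pinning the source to a git revision instead (the URL and `rev` are placeholders):

```nix
# /website/default.nix — variant with a pinned source; URL and rev are placeholders
{ pkgs }:
pkgs.stdenv.mkDerivation {
  name = "website";
  version = "0.1.0";
  src = builtins.fetchGit {
    url = "https://example.com/user/website.git";
    rev = "0000000000000000000000000000000000000000"; # pin a real commit here
  };
  buildInputs = with pkgs; [ hugo ];
  dontConfigure = true;
  buildPhase = ''
    cp -r $src/* .
    ${pkgs.hugo}/bin/hugo
  '';
  installPhase = ''
    mkdir -p $out
    cp -r public/* $out/
  '';
}
```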

@@ -1,80 +0,0 @@
---
title: Setting up an SSG on NixOS
description: With the help of Hugo we can generate a static site, which we can later serve to the web with Nginx
date: 2024-04-24 00:00:00+0000
image: daniele-levis-pelusi-YKsqkazGaOw-unsplash.jpg
categories:
- Nix
- Guide
tags:
- Nix
- Nginx
- Short
- Hugo
draft: true
---
## Introduction
Searching the internet, I couldn't find a guide that suited me on how to set up a Hugo website on NixOS, so I wrote a short builder for it myself. I figured I'd quickly wrap it up in a short blog post in case someone finds it useful.
## A derivation for the Hugo site
```nix
# /weboldal/default.nix
{ pkgs }:
pkgs.stdenv.mkDerivation rec {
name = "weboldal";
version = "0.1.0";
src = /home/user/website;
buildInputs = with pkgs; [ hugo ];
dontConfigure = true;
buildPhase = ''
cp -r $src/* .
${pkgs.hugo}/bin/hugo
'';
installPhase = ''
mkdir -p $out
cp -r public/* $out/
'';
}
```
## Defining Nginx
```nix
# /szolgaltatosk/nginx.nix
{ pkgs, ... }: {
services.nginx = {
enable = true;
virtualHosts = {
"weboldal.hu" = {
forceSSL = true;
enableACME = true;
locations."/" = {
        # The website derivation's location, relative to this file
root = pkgs.callPackage ../weboldal/default.nix { };
};
      # make nginx serve hugo's 404 page instead of its own
extraConfig = ''
error_page 404 /404.html;
'';
};
};
};
}
```
## Rebuilding
After importing the nginx file into your `flake.nix` or `configuration.nix`, you can rebuild the operating system and the website will already be running.
```bash
sudo nixos-rebuild switch
```


@ -9,25 +9,26 @@ tags:
- Fuck around and find out
- Scammers
draft: false
toc: true
---
A year ago I got a scam SMS, supposedly from Hungarian law enforcement, saying that I had some payments due and that if I didn't pay in time there would be legal consequences.
Obviously anyone with common sense and a bit of knowledge about the internet will recognize this as a likely scam.
I could have ignored it and gone on with my day, but then I remembered that my parents keep asking me whether what they received on Facebook is a scam or not. Thankfully my parents have me, so they can check whether what the person behind those messages promises will really happen. But plenty of people fall into these kinds of traps: the elderly, those who don't quite understand how shady these schemes are, and people on the autism spectrum (like my close relatives). So I decided to flood their servers with fake data.
### Building my first spamming script
## Building my first spamming script
First I conducted some basic research. I opened the developer console in Firefox on my PC and entered some fake data into the input fields. Then I analyzed all the requests the website was sending to the server, which gave me a good idea of how to build a basic script that could send millions of fake records.
Speaking of fake: I was smart and decided not to spam random data into every field. Instead I chose to mess with them even more by sending realistic-looking data. That way they have a much harder time deciding which records are real and which are not.
Since the target audience of these assholes was Hungarians, I grabbed a h u g e list of first and last names and used those databases to generate realistic names. I also used some Python libraries to generate all kinds of credit cards: Visa, Mastercard, you name it. After about an hour I had a fully working script that could send thousands of records a second to their database.
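The original script isn't reproduced here, but the core trick can be sketched in plain Python (the name lists and record shape below are hypothetical; what makes generated card numbers look plausible is a valid Luhn check digit):

```python
import random

# Hypothetical sample lists; the real script used full name databases
FIRST_NAMES = ["Bence", "Máté", "Anna", "Réka"]
LAST_NAMES = ["Nagy", "Kovács", "Tóth", "Szabó"]


def luhn_check_digit(partial: str) -> str:
    """Compute the check digit that makes `partial` pass Luhn validation."""
    total = 0
    for i, ch in enumerate(reversed(partial)):
        d = int(ch)
        if i % 2 == 0:  # double every second digit, counted from the check digit
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)


def fake_record() -> dict:
    """Generate one realistic-looking (but fake) name and Visa-style card number."""
    partial = "4" + "".join(random.choice("0123456789") for _ in range(14))
    return {
        # Hungarian name order: family name first
        "name": f"{random.choice(LAST_NAMES)} {random.choice(FIRST_NAMES)}",
        "card": partial + luhn_check_digit(partial),
    }
```

Each such record can then be POSTed at the phishing form's endpoint; since the numbers pass a Luhn check, filtering them out server-side is not trivial.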
#### Running the script
### Running the script
I knew I would get blocked really fast, and I was right: after about 2 000 requests my IP was blocked.
Funny, because I wasn't having enough fun yet. I called up a few friends, got them to download my script and run it in the background, and also set it up on my VPS. After all that I was spamming them from about 12 IP addresses.
Long story short, after two weeks of basically DDoS-ing them they took their website down. I win.
### A more sophisticated attack
## A more sophisticated attack
Today I saw a mutual on a social media platform post a screenshot of a scam SMS from the Hungarian police. Shiver me timbers, I must pay them my whole life savings so they don't come and arrest me! Anyway, I decided to look into this scam as well and found myself in a bittersweet position. These attackers were using some payment processor behind their servers that would automatically charge the victim a certain amount of money upon entering their credit card details. How do I know that, you might ask? I don't; it's my best guess.
The evidence leading up to that goes like this:
@ -70,7 +71,7 @@ I copied the whole telegram URL that contained their api key and opened up [Inso
After this I started writing my code.
#### The program
### The program
This one was more complicated, since I had to line up two factors side by side to convince those idiots that people were really giving away their information.
I wrote a basic Rust program that would spawn 1200 threads, each with a timer at a random number of seconds, that would send random card details to their backend, wait a little, and then send a dummy text to their Telegram channel so they would think they were receiving confirmations of actual codes coming through.
@ -90,7 +91,7 @@ I made some errors while spawning threads and it resulted in not sending any req
Since all my friends use Windows I had to compile my code to an `exe` so they had a chance of running it.
Eventually I got a few of them to run it, but not as many as the first time. Within 1-3 minutes they would get blocked or hit a 404 error. These scammers were active and blocked anyone who started spamming. In times like this I wish I had a botnet. I lost. But I at least managed to send them a few hundred records.
#### Lessons
## Lessons
I won't say I'm a genius; anyone with basic knowledge of HTTP, APIs and a bit of programming could do all this magic. What I'm trying to say is that these people really think they can get away with all this, and sometimes they do. They are probably smart enough to buy their servers with some untraceable cryptocurrency and deploy their skimming services under fake names. There isn't really a way to stop them. But there is a way to mitigate their impact: by sending them a lot of fake data :)


@ -0,0 +1,213 @@
---
title: Package rust workspace project into containers with nix
description: A little technique I built up with actions and nix
date: 2024-09-03 00:00:00+0000
draft: false
toc: true
---
One of my projects that I have been working on for months is based on microservices, meaning I have a bunch of programs that need to be containerized in order to run in a Kubernetes cluster.
I tried building the images with a generic `Dockerfile`, but it resulted in unbelievably large images. One microservice compiled to a 35MB binary, yet the Docker image I made was almost 10x that (340MB). Not only was it inefficient, I would also have wasted time optimizing the images. Then I remembered that I've been using Nix for a few months now — it can build containers, right? Hell yeah it can! And it does it neatly packed: the same 35MB binary packaged in a container comes out to 37MB, with zero optimization on my part. It also integrated neatly with my existing flake and modules.
I'll showcase the process in [one of my existing projects](https://git.4o1x5.dev/4o1x5/producer-consumer/). It's a basic Rust workspace with 2 services and a common package containing some shared types. But for the sake of eliminating confusion I'll also write down how to make a `cargo-hakari` workspace with [crane](https://crane.dev).
## Create the project
Crane has a template that generates all the code needed to start off.
```bash
nix flake init -t github:ipetkov/crane#quick-start-workspace
```
Upon running this command in a directory you are greeted with a `cargo-hakari` workspace setup.
```
├── Cargo.lock
├── Cargo.toml
├── deny.toml
├── flake.nix
├── my-cli
│ ├── Cargo.toml
│ └── src
│ └── main.rs
├── my-common
│ ├── Cargo.toml
│ └── src
│ └── lib.rs
├── my-server
│ ├── Cargo.toml
│ └── src
│ └── main.rs
└── my-workspace-hack
├── Cargo.toml
├── build.rs
└── src
└── lib.rs
```
## Adjusting the flake.nix to our needs
Since your use case will most likely differ from mine, it's important to keep in mind what to change during development.
Whenever you add a new crate to the project, you sadly need to manually sync it up with your `flake.nix` file, so that during compilation the build knows where to find the packages. If you forget this, you will get errors stating that cargo cannot find the crates...
```nix
fileSetForCrate = crate: lib.fileset.toSource {
root = ./.;
fileset = lib.fileset.unions [
./Cargo.toml
./Cargo.lock
./producer
./consumer
./common
crate
];
};
```
### Don't forget essential packages!
If your application uses any imported crates you will almost certainly need `pkg-config`, and if you connect to the internet or make any HTTP connections you will need `openssl`. I still haven't figured out why `libiconv` is needed; after searching for a bit, it most likely does some text encoding. I'm not sure your project will need it, but it can't hurt to leave it in.
```nix
# Common arguments can be set here to avoid repeating them later
commonArgs = {
inherit src;
strictDeps = true;
buildInputs = with pkgs; [
openssl
pkg-config
libiconv
] ++ lib.optionals pkgs.stdenv.isDarwin [
pkgs.libiconv
pkgs.openssl
pkgs.pkg-config
];
nativeBuildInputs = with pkgs;[
openssl
pkg-config
libiconv
];
};
```
### Define packages
Another example from the project: at the bottom of the variable definitions (`let`-`in`) you will need to copy the code below and replace the details. For example:
If you add a `consumer-two` crate, you will need to define that package as a variable and then use it in the crane configs.
```nix
producer = craneLib.buildPackage (individualCrateArgs // {
pname = "producer";
cargoExtraArgs = "-p producer";
src = fileSetForCrate ./producer;
});
consumer = craneLib.buildPackage (individualCrateArgs // {
pname = "consumer";
cargoExtraArgs = "-p consumer";
src = fileSetForCrate ./consumer;
});
```
```nix
packages = {
inherit consumer producer;
}
```
## Compile a service into a container
After you have defined your packages, you can use `buildLayeredImage` from `dockerTools` to wrap a package in a container.
I also included some additional packages here for safety.
```nix
consumer-container = pkgs.dockerTools.buildLayeredImage {
name = "consumer";
tag = "latest";
contents = with pkgs; [
cacert
openssl
pkg-config
libiconv
];
config = {
WorkingDir = "/app";
Volumes = { "/app" = { }; };
Entrypoint = [ "${consumer}/bin/consumer" ];
};
};
```
After this, running the nix build command will create a symlink called `result`.
```bash
nix build .#consumer-container
```
## Automate with ~~github~~ forgejo actions
After building the image with nix, the `result` symlink points to a gzipped tarball of the image. We can use docker to load that tarball, tag the image, and push it to a registry.
```bash
nix build .#consumer-container
docker image load --input result
docker image tag consumer:latest git.4o1x5.dev/4o1x5/consumer:latest
docker image push git.4o1x5.dev/4o1x5/consumer:latest
```
We can chain these commands in a workflow step, using _&&_ to run them sequentially.
```yml
name: CD
on:
push:
branches: ["master"]
jobs:
docker:
runs-on: ubuntu-latest
steps:
-
name: Checkout repo
uses: https://github.com/actions/checkout@v4
with:
repository: '4o1x5/producer-consumer'
ref: 'master'
token: '${{ secrets.GIT_TOKEN }}'
-
name: Set up QEMU for docker
uses: https://github.com/docker/setup-qemu-action@v3
-
name: Set up Docker Buildx
uses: https://github.com/docker/setup-buildx-action@v3
-
name: Set up nix cachix
uses: https://github.com/DeterminateSystems/magic-nix-cache-action@main
-
name: Login to git.4o1x5.dev container registry
uses: docker/login-action@v3
with:
registry: git.4o1x5.dev
username: ${{ secrets.GIT_USERNAME }}
password: ${{ secrets.GIT_TOKEN }}
-
name: Setup nix for building
uses: https://github.com/cachix/install-nix-action@v27
with:
# add kvm support, else nix won't be able to build containers
extra_nix_config: |
system-features = nixos-test benchmark big-parallel kvm
-
name: Build, import, tag and push consumer container
run: |
nix build .#consumer-container && \
docker image load --input result && \
docker image tag consumer:latest git.4o1x5.dev/4o1x5/consumer:latest && \
docker image push git.4o1x5.dev/4o1x5/consumer:latest
```


@ -0,0 +1,154 @@
---
title: Avoid GitHub, selfhost a forgejo instance now
description: GitHub has long been the de facto place for hosting code, but as Forgejo is getting federation support it's a better idea to just host your own Git forge
date: 2024-04-25 00:00:00+0000
image: yancy-min-842ofHC6MaI-unsplash.jpg
categories:
- Blog
- Guide
- Sysadmin
tags:
- Nix
- Nginx
- GitHub
- Forgejo
- Selfhost
- Homelab
draft: false
---
## The idea
The coding community has deemed GitHub the de facto platform for hosting code.
However, there's a catch: GitHub belongs to Microsoft, who
uses that position to impose restrictive license agreements on users. Unbeknownst to
many, signing up with GitHub grants them permission to train Copilot on your code,
which Microsoft then sells for profit.
By choosing to self-host a Git instance, you retain complete control over the safety and uptime of your data. This realization
led me to leave GitHub behind and instead opt for alternative platforms like forgejo,
which is set to introduce [federation support](https://forgefed.org/) in the near future - similar to the fediverse. This innovative concept will enable users to contribute to each other's
repositories through pull requests, issues, and comments by using their own instances, creating a more
interconnected and collaborative environment. I will guide you through
the process of hosting Forgejo on NixOS.
### Forgejo vs Gitea
Gitea is great software, sharing many similarities with Forgejo. However, the primary distinction
lies in the backing of Gitea's development - a for-profit company - which may lead to diverging
priorities when it comes to users. In contrast, Forgejo is maintained by a non-profit organization, allowing for a more concerted
effort towards community needs. This focus on community translates into a superior ability to address
security concerns. Additionally, while Gitea relies on GitHub Actions for development, Forgejo leverages
its own custom actions, providing an extra layer of autonomy. Moreover, Gitea abandoned their federation
project around two years ago, whereas Forgejo is actively developing theirs.
## NixOs
### Forgejo
It's really simple to host a forgejo instance on nix as there are already predefined options for it made by the community.
```nix
{ pkgs, config, ... }:{
services.postgresql.enable = true;
services.forgejo = {
enable = true;
settings = {
server = {
# You can just replace the following two if you don't have a hostname set.
DOMAIN = "git.${config.networking.domain}";
ROOT_URL = "https://git.${config.networking.domain}/";
        DISABLE_SSH = true;
      };
      # registration is toggled in the [service] section, not [server]
      service.DISABLE_REGISTRATION = true;
DEFAULT.APP_NAME = "My git server";
actions.ENABLED = true;
};
database = {
type = "postgres";
createDatabase = true;
};
};
}
```
### Nginx reverse proxy
```nix
{ pkgs, config, ... }:
{
services.nginx = {
enable = true;
virtualHosts = {
"git.${config.networking.domain}" = {
forceSSL = true;
enableACME = true;
locations."/" = {
          proxyPass = "http://127.0.0.1:3000";
};
};
};
};
# enable automatic certification fetching via Let's Encrypt
security.acme = {
acceptTerms = true;
defaults.email = "admin+acme@${config.networking.domain}";
};
}
```
### Deploying
After you have written these two configurations into a file like `configuration.nix`, you can rebuild the system and see that Forgejo is up and running.
```
sudo nixos-rebuild switch
```
### Runners / Actions
Forgejo has runners that you can use with workflows to build software on every push or pull-request merge. We will set that up too. As you may have noticed, I already defined `actions.ENABLED` in the Forgejo config.
1. If you have not yet created a profile on the instance, go ahead. The first profile is automatically assigned `administrator`.
2. Go to `site administration` (top right).
3. Select actions on the left, then runners.
4. Create a new runner token.
5. Paste it into the following config under `token`.
```nix
{pkgs, config, ...}:{
services.gitea-actions-runner.instances = {
root = {
enable = true;
      url = "http://127.0.0.1:${toString config.services.forgejo.settings.server.HTTP_PORT}";
token = "place your token here";
settings = {
container = {
# internet access for container
network = "bridge";
};
};
labels = [
"debian-latest:docker://node:18-bullseye"
"ubuntu-latest:docker://node:18-bullseye"
];
# define the hostname so we know what server the runner is on.
      name = config.networking.hostName;
};
};
}
```
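One caveat: a `token` written literally into the configuration ends up world-readable in the Nix store. The runner module also accepts a `tokenFile` option instead, so a sketch like this (assuming you deploy the secret file yourself, e.g. with agenix or sops-nix) is the safer choice:

```nix
{
  services.gitea-actions-runner.instances.root = {
    # file containing TOKEN=<registration token>, kept out of the Nix store
    tokenFile = "/var/lib/secrets/forgejo-runner.env";
  };
}
```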
If you want more runner images, [you can find them here](https://github.com/nektos/act/blob/master/IMAGES.md).
### Rebuild once again
```
sudo nixos-rebuild switch
```
### Enjoy
This is all it takes to fully set up the instance. After rebuilding you can see it's up and running.


@ -11,6 +11,7 @@ tags:
- Servers
- VPS
draft: false
toc: true
---
I wanted to host a few services for myself in 2020 and decided to rent a medium VPS at Contabo. I stayed with them for exactly two years, then got some servers at home and cancelled my subscription. It was a pleasant experience during that period. Despite the many controversies I saw in some reddit posts, I had zero downtime and zero technical difficulties.


@ -1 +0,0 @@
version: "3"


@ -1,5 +1,5 @@
baseURL = "https://4o1x5.dev"
languageCode = "en-us"
lang = "en"
title = "4o1x5.dev"
theme="archie"
@ -9,7 +9,9 @@ pygmentscodefences = true
pygmentscodefencesguesssyntax = true
paginate=3
paginate=8
[params]
mode="auto" # color-mode → light,dark,toggle or auto
@ -17,18 +19,15 @@ paginate=3
mathjax = true # enable MathJax support
katex = true # enable KaTeX support
customcss = ["css/purple.css", ]
name="4o1x5"
about="Software developer, privacy and libre advocate."
[params.listening_to]
title = "too late to be sorry"
artist = "CXSMPX"
url = "https://example.com"
[[params.social]]
name = "Forgejo"
icon = "forgejo"
url = "https://git.4o1x5.dev/4o1x5"
[[params.social]]
name = "Matrix"
icon = "message"
url = "https://matrix.to/#/@4o1x5:4o1x5.dev"
# Main menu Items
[[menu.main]]
name = "Home"
@ -44,7 +43,3 @@ weight = 2
name = "Frontends"
url = "/page/privacy-frontends"
weight = 3

37
static/robots.txt Normal file

@ -0,0 +1,37 @@
User-agent: GPTBot
Disallow: /
User-agent: ChatGPT-User
Disallow: /
User-agent: Google-Extended
Disallow: /
User-agent: PerplexityBot
Disallow: /
User-agent: Amazonbot
Disallow: /
User-agent: ClaudeBot
Disallow: /
User-agent: Omgilibot
Disallow: /
User-Agent: FacebookBot
Disallow: /
User-Agent: Applebot
Disallow: /
User-agent: anthropic-ai
Disallow: /
User-agent: Bytespider
Disallow: /
User-agent: Claude-Web
Disallow: /
User-agent: Diffbot
Disallow: /
User-agent: ImagesiftBot
Disallow: /
User-agent: Omgilibot
Disallow: /
User-agent: Omgili
Disallow: /
User-agent: YouBot
Disallow: /
User-agent: ia_archiver
Disallow: /

@ -1 +1 @@
Subproject commit d8819d5eee8b0817f41bda3a9dc2100cd6b2b0bd
Subproject commit 9702455db1833f7bd553ce261fb6a647db11f0da