63 Commits

Author SHA1 Message Date
2abb5ec721 fix some mistakes in luks2 2024-09-09 11:44:05 +02:00
dcf7021bd4 ssh: add instructions on clipboard sharing 2024-01-25 16:13:23 +01:00
5e111ba430 nextcloud: cleanup 2024-01-03 13:00:35 +01:00
d95fd3186e fix APCu instructions, add redis 2024-01-03 12:52:20 +01:00
ca8519bfac fix nextcloud link 2024-01-03 00:25:48 +01:00
c7c1dfd44e add dnsmasq instructions for DNS override 2024-01-03 00:24:06 +01:00
2182cbabd6 add HSTS section 2024-01-03 00:11:50 +01:00
207b443898 add ACPU section, mention NAT loopback issue 2024-01-02 21:45:27 +01:00
a83af847be installation instructions for nextcloud added 2024-01-02 20:25:30 +01:00
48ad81b8d0 explain drop-off folder usecase 2024-01-02 12:59:53 +01:00
c392dbe430 add note for nextcloud as alternative to caldav, nextcloud android sync and current state of server setups 2024-01-02 12:47:57 +01:00
63146f2419 add calcurse sync and khard sync instructions 2024-01-02 12:36:11 +01:00
a4e9e4ae1b add rudimentary nextcloud docs with drop-off link sharing instructions 2024-01-01 17:22:52 +01:00
022dc742bb add khard tips to neomutt 2024-01-01 17:05:38 +01:00
6f9e095d7f added LaptopSetup 2023-10-17 15:37:21 +02:00
d9e33fc1de remove references to sda3 2023-10-17 15:09:34 +02:00
e17201e8e9 remove dummy 1M partition, fix typos and add git 2023-10-13 11:45:53 +02:00
315d3b0317 Add restic 2023-09-28 12:39:34 +02:00
995462adb9 fix adb devices command 2023-08-11 13:35:02 +02:00
87ee25281c add WiFi Credentials note 2023-08-01 17:06:08 +02:00
4df61b19c1 added chromecast with Google TV default launcher disabling guide 2023-08-01 16:53:57 +02:00
8d7835c833 explain port for nginx 2023-07-30 11:37:12 +02:00
c8add5bafe merge 2023-07-29 15:16:12 +02:00
dd3965a7d0 added Searx guide 2023-07-29 15:10:13 +02:00
20552917a9 Fixed wrong link to anki 2023-07-08 15:51:21 +02:00
ac98999f97 Add calcurse sync 2023-07-08 15:21:16 +02:00
9a70969f75 fix dir 2023-07-04 12:37:13 +02:00
ad599eb2d7 syntax highlighting for service files 2023-07-04 12:35:43 +02:00
5b03c28feb added docker 2023-07-04 12:32:25 +02:00
121e3224ea fix nginx setup 2023-07-04 11:14:19 +02:00
dc9d9085ce more env vars for cleanliness 2023-07-04 11:05:45 +02:00
ce4bef230c fix ankidroid msync 2023-07-04 10:52:26 +02:00
fdf6755f9d added anki sync server instructions 2023-07-04 10:48:25 +02:00
9726accb0e auto-mounting additional hard-drives added 2023-07-01 15:03:43 +02:00
9a1a652e44 added potential solutions for hibernation keyfile 2023-06-29 10:07:41 +02:00
377abbef49 smaller fixes 2023-06-27 10:53:25 +02:00
9c0ff364e3 smaller clarifications 2023-06-27 10:50:02 +02:00
35f36f119b added luks2 tutorial 2023-06-27 10:41:50 +02:00
2d4f98a2f1 chmod -x LICENSE 2022-11-15 17:36:32 +01:00
597762e7b0 Merge branch 'master' of https://github.com/AlexBocken/mykb 2022-11-15 17:35:21 +01:00
999ba81369 added nvidia 2022-11-15 17:33:50 +01:00
25aa98d47a Merge remote-tracking branch 'refs/remotes/origin/master' 2022-10-07 21:04:36 +01:00
92c2d15a3e Update 2022-10-07 21:03:37 +01:00
bd1e99e154 update matlab instructions for R2022a 2022-05-24 11:25:49 +02:00
2730308da8 add spellcheck instructions to qutebrowser 2022-05-05 15:20:38 +02:00
c79ffe5fe6 fix md 2022-03-13 14:05:54 +01:00
606282fadd added skip youtube ads userscript tutorial 2022-03-13 14:04:23 +01:00
779518ce61 added pass-git-helper config instructions to git 2022-03-07 11:41:20 +01:00
d57858def9 seperate server from desktop 2022-03-07 11:34:18 +01:00
92a56a2dec added matlab instructions 2022-03-07 11:32:50 +01:00
c61a8b190d added restart nginx instruction to php 2022-02-21 15:39:39 +01:00
76109d76ff added initial php installation guide 2022-02-21 15:30:48 +01:00
f59434b1d9 Server Setup outline - really 2022-01-16 17:06:27 +01:00
98413f798f added server setup outline 2022-01-16 17:05:00 +01:00
3cc199d48c Formatting 2022-01-16 13:21:15 +01:00
72e269fcf9 Formatting 2022-01-16 13:20:36 +01:00
f8d41d7787 Grammar as always 2022-01-16 13:17:32 +01:00
3ebb237a6c Added Rainloop 2022-01-16 13:13:46 +01:00
ba735972b1 Update neomutt 2022-01-16 12:45:02 +01:00
287bd24257 add initial neomutt 2022-01-16 11:32:58 +01:00
2a17f8f933 Fixed GIT link 2022-01-02 09:58:52 +01:00
832caa536c Fixed GIT link 2022-01-02 09:58:20 +01:00
c940054117 Added GIT 2022-01-02 09:57:02 +01:00
27 changed files with 3048 additions and 11 deletions

View File

@ -1,9 +1,20 @@
# mykb
My knowledge base for misc. linux desktop and server setups/configurations
This is a collaborative effort of Till Dieminger and me as our needs are similar enough to warrant a shared knowledge base.
For desktop environments a heavy Arch bias is to be expected, but a lot of stuff can be applied in many distros.
Switching out pacman/paru for the appropriate package manager of your choice will have to be done, of course.
For server stuff I have been mostly using Debian. Newer additions will assume Arch Linux on your server.
This is because of Debian's ancient software versions (Python 3.7 is over 5 years old now) and the resultant overuse of Docker or installing everything from source.
If I `git clone` all my software, why do I even need a package manager?
To struggle with stupidly outdated versions of `python` and `nodejs`? No thanks.
Most of it should be compatible with Debian as well, though, as Arch oftentimes just ships more minimalistic configs.
Assuming, of course, that the two distros' versions are compatible.
See [landchad.net](https://landchad.net) for a similar project centered around mostly server stuff.
## Current state

View File

@ -1,4 +1,4 @@
# General
- [ ] create a script as a wrapper to these docs similar to tldr/kb
- [X] create wrapper script for md to html export
- Maybe usage of bundestag wrapper scripts?

View File

@ -0,0 +1,44 @@
# Chromecast with GoogleTV
While it is a great SmartTV replacement, the default setup does not allow for much customization and comes with annoying ads.
## Changing the Default Launcher
You will need:
- A Chromecast with GoogleTV
- A Laptop with `adb` installed. (On Arch: part of the `android-tools` package)
- A Laptop with Thunderbolt or USB-C which allows for high power throughput to power the Chromecast as well as connect via ADB.
Google, being Google, does not allow for the disabling of Ads in their default Launcher.
This is a tutorial on how you can disable the default launcher and replace it with one of your choice.
We're assuming you're using a Chromecast with Google TV similar to [this one](https://www.digitec.ch/de/s1/product/google-chromecast-mit-google-tv-4k-google-assistant-streaming-media-player-14676764).
### Download a Launcher of your choice
Go to the Google Play Store and choose any Launcher you would like to use. Good ones are FLauncher or Launchy for a more minimalistic approach.
Ensure that the Launcher is installed and working before proceeding.
### Enable Developer Options
Go to `Settings -> Device -> About -> Build` and press the main button about 10 times until a Dialog pops up claiming you're now a developer.
### Connect your Laptop
Plug the power cord of the Chromecast into your laptop. You will most likely require a USB-C to USB-C cable instead of the included USB-A to USB-C one. The Chromecast should now be able to boot up without the low-power warning. If you're getting the low-power warning you cannot continue and might require a different laptop with better Thunderbolt/USB-C support.
On the Chromecast, a dialog should now pop up asking whether you want to trust the connected device. Trust it.
### Disable the Default Launcher via ADB
On your Laptop, open a terminal and ensure that you can find the chromecast via `adb devices -l`. One device should be listed.
Then, use these commands:
```sh
adb shell pm disable-user --user 0 com.google.android.apps.tv.launcherx
adb shell pm disable-user --user 0 com.google.android.tungsten.setupwraith
```
This should have disabled the default launcher. When pressing home, a dialogue should pop up asking for a new default Launcher if multiple are installed.
Your WiFi Credentials might be forgotten for some reason after these steps.
You can just re-add them in your settings and they should persist from now on.
### Re-Enable the Default Launcher via ADB
In case you want to revert these changes you can use these commands to do so:
```sh
adb shell pm enable com.google.android.apps.tv.launcherx
adb shell pm enable com.google.android.tungsten.setupwraith
```

114
docs/GIT.md Normal file
View File

@ -0,0 +1,114 @@
# General
Git is version control software that allows you to save the progress of software/text/whatever development.
It is probably best known from GitHub, but we will show how to set up your own Git instance and how to use it.
## Installing GIT
### What you need
1. A working server, be it self-hosted at home or a remote instance, called REMOTE in the following
2. A local machine that you develop whatever on, called LOCAL in the following
### Installing GIT
On the LOCAL machine, use your favorite package manager, for example
```sh
pacman -S git
```
The same holds for the REMOTE machine, but here I would advise using some LTS distro, so probably
```sh
sudo apt install git
```
### Setting up the Server
First we have to add the git user on the REMOTE, give them a password, and enable ssh logins.
```sh
sudo adduser git
su git
passwd
cd
mkdir .ssh && chmod 700 .ssh
touch .ssh/authorized_keys && chmod 600 .ssh/authorized_keys
```
Now add the ssh public keys of your LOCAL machine to the `authorized_keys` file on the REMOTE.
For this on the LOCAL machine generate a key-pair using `ssh-keygen -t rsa` if you don't have one yet.
Then copy the content of `LOCAL/.ssh/id_rsa.pub` to the `REMOTE/.ssh/authorized_keys` file.
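Instead of copying the key by hand, `ssh-copy-id` can append it for you (assuming the default key path, and that password logins are still enabled on REMOTE at this point):

```sh
# Appends ~/.ssh/id_rsa.pub to /home/git/.ssh/authorized_keys on REMOTE
ssh-copy-id -i ~/.ssh/id_rsa.pub git@REMOTE
```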
## New Repository
To initialize a repository on the REMOTE server we have to create a new folder and tell git to track this folder.
This has to be done once for every new repository.
```sh
cd
mkdir NewRepo.git
cd NewRepo.git
git init --bare
```
On the LOCAL machine we then have to create a folder and tell git to sync this with the server.
We will assume that `REMOTE` is either the IP or the domain-name of the REMOTE instance.
```sh
cd project
git init
git add .
git commit -m 'Initial commit'
git remote add origin git@REMOTE:/home/git/NewRepo.git
git push origin master
```
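If you want to rehearse this flow without a server, a bare repository on the local disk can stand in for REMOTE (all paths here are throwaway examples):

```sh
# "REMOTE" stand-in: a bare repository on local disk
mkdir -p /tmp/gitdemo && cd /tmp/gitdemo
git init --bare NewRepo.git

# "LOCAL" side: create a project, commit, and push to the bare repo
mkdir -p project && cd project
git init -b master                      # -b needs git >= 2.28
git config user.email you@example.com   # only needed if not set globally
git config user.name "You"
echo hello > README
git add .
git commit -m 'Initial commit'
git remote add origin /tmp/gitdemo/NewRepo.git
git push origin master
```

The only difference to the real thing is the remote URL: a filesystem path here, `git@REMOTE:/home/git/NewRepo.git` over ssh.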
## Using Git
To now sync this folder to other devices use
```sh
git clone git@REMOTE:/home/git/NewRepo.git
cd NewRepo
```
To update the repository go to the folder, add the necessary files using `git add <FILES>` and then commit them using `git commit -m '<MESSAGE>'`. For files git already tracks, these steps can be done as one using
```sh
git commit -am 'Fix for README file'
```
Now push it to the server using `git push origin master`.
### Branches
To create a new branch, use `git checkout -b <BRANCHNAME>`.
To push this to the remote location, use `git push origin <BRANCHNAME>`.
## Configuration
### Pass integration
pass is a CLI password manager. It allows for git integration.
First, install `pass-git-helper` from the AUR
```sh
paru -S pass-git-helper
```
Set pass as your credential helper in git:
```sh
git config --global credential.helper /usr/bin/pass-git-helper
```
In `~/.config/pass-git-helper/git-pass-mapping.ini`, create rules in the following way:
```ini
[github.com]
target=dev/github
[*.fooo-bar.*]
target=dev/fooo-bar
```
## Further Info
- [Git Website](https://git-scm.com/book/en/v2/Git-on-the-Server-Setting-Up-the-Server)

25
docs/LaptopSetup.md Normal file
View File

@ -0,0 +1,25 @@
# LaptopSetup
General tips and tricks for setting up laptops in particular. Assuming Arch Linux/systemd.
## Power/Hibernation
We want to not edit pacman-provided files but provide drop-ins.
Hence create the folder `/etc/systemd/logind.conf.d` if not already present.
All the following settings will be written into `/etc/systemd/logind.conf.d/logind.conf`
### Let DWM handle PowerOff
```conf
[Login]
HandlePowerKey=ignore
```
### Hibernate on Lid close
```conf
[Login]
HandleLidSwitch=hibernate
HandleLidSwitchExternalPower=hibernate
```
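The drop-in only takes effect once logind re-reads its configuration. On current systemd versions, restarting logind is safe and should not terminate your session (a hedged note; on very old versions it could):

```sh
systemctl restart systemd-logind.service
```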

113
docs/Searx.md Normal file
View File

@ -0,0 +1,113 @@
# Searx on Arch
This tutorial is on how to install Searx on Arch servers.
On Debian or other distros lacking morty, filtron, and searx in their repos, the guide given by the Searx devs themselves is fine, but you will have to rely on Python venvs and updating is difficult/tedious.
For this tutorial we will follow the recommended setup of installing morty and filtron alongside searx for a more secure setup.
For this tutorial we are assuming you already have nginx set up and an SSL certificate for the domain you want to use; the domain we use as a dummy is `example.com`.
## Installation
Switch to a non-root user with sudo rights for an AUR manager:
```sh
su - alex
paru -S morty-git filtron-git searx
```
## Configuration
### Services
#### Morty
First we need a morty secret key which should be base64 encoded:
```sh
openssl rand -hex 16 | base64
```
Edit the `ExecStart` in `/usr/lib/systemd/system/morty.service`:
```ini
ExecStart=/usr/bin/morty -listen 127.0.0.1:3000 -key '<your_key_here>' -timeout 5
```
and add
```ini
Environment=DEBUG=false
```
We also need to add this to our `/etc/searx/settings.yml`:
```yml
result_proxy:
url: example.com/morty/
key: !!binary "<your_key_here>"
```
#### Filtron
Should be good with defaults
### Searx
#### Systemd
Adjust your service file for searx (`/etc/uwsgi/searx.ini`) to include
```ini
# comment out the http-socket line
http = 127.0.0.1:8888
env = LANG=C.UTF-8
env = LANGUAGE=C.UTF-8
env = LC_ALL=C.UTF-8
# OPTIONAL and does nothing if disable-logging = true
logger = systemd
```
#### settings.yml
Change the following lines in `/etc/searx/settings.yml`
```yml
server:
image_proxy: True
http_protocol_version: "1.1"
ui:
theme_args:
oscar-style: logicodev-dark
# Ensure that this is also set to something, should be done automatically by the PKGBUILD for searx
server:
secret_key: "<ensure_this_is_set_to_something_secure>"
```
#### Nginx
In the appropriate `server { listen 443 ssl; }` section of your nginx setup add the following. `MINOR_VERSION` should be `11` for Python 3.11, for example; adjust appropriately.
```nginx
location /searx/static/ {
alias /usr/lib/python3.<MINOR VERSION>/site-packages/searx/static/;
}
location /morty {
proxy_pass http://127.0.0.1:3000/;
proxy_set_header Host $host;
proxy_set_header Connection $http_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Scheme $scheme;
}
location /searx {
proxy_pass http://127.0.0.1:4004/;
proxy_set_header Host $host;
proxy_set_header Connection $http_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Scheme $scheme;
proxy_set_header X-Script-Name /searx;
}
```
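To find the right `MINOR_VERSION` for the `alias` above, you can ask the installed Python directly (a small helper, assuming `python3` is the interpreter searx runs under):

```sh
# Prints the full static path for the running python3 version
python3 -c 'import sys; print("/usr/lib/python3.%d/site-packages/searx/static/" % sys.version_info.minor)'
```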
Verify via `nginx -t`, then we are ready to start our services.
```sh
systemctl daemon-reload
systemctl restart nginx
systemctl enable --now morty
systemctl enable --now filtron
systemctl enable --now uwsgi@searx
```
You should now be able to use searx at https://example.com/searx

63
docs/ServerSetup.md Normal file
View File

@ -0,0 +1,63 @@
# PreRequisites
1. A domain name provider ([EPIK](epik.com), etc)
2. A VPS provider ([vultr](vultr.com), etc)
# Set DNS Records
1. Get the IP of your server from your VPS provider.
2. Enable Reverse DNS for IPv6
3. Enter the IP into the DNS system interface of your DNS provider.
- Enable IPv4 and IPv6 this way.
# Server
- `ssh-copy-id root@domain.xyz`
- Edit /etc/ssh/sshd_config : `UsePAM no` and `PasswordAuthentication no` and restart ssh using `systemctl reload sshd`
- `apt update; apt upgrade` and delete the sketchy line from `.bashrc`.
- install webserver stuff `apt install nginx python3-certbot-nginx rsync`
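The two sshd_config changes above as a fragment for reference (make sure your key login works before restarting sshd, or you can lock yourself out):

```conf
# /etc/ssh/sshd_config -- key-based logins only
UsePAM no
PasswordAuthentication no
```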
# Website
- In `/etc/nginx/sites-available` copy `default` to `domainname`.
- Here change the root line to `root /PATH/TO/WEBSITE`
- Change the `server_name` line to `server_name HOSTNAME.xyz www.HOSTNAME.xyz`
- Copy this file to make the mail server and change `root` again to something relatable like `root /var/www/mail`.
- Change the `server_name` to mail.HOSTNAME.xyz and www.mail.HOSTNAME.xyz
- Now link both files to `/etc/nginx/sites-enabled/` using `ln -s /etc/nginx/sites-available/mail /etc/nginx/sites-enabled/`
- Create the directories with `mkdir -p /var/www/domainname /var/www/mail` and add a `index.html` to both of them.
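Putting the bullet points together, a minimal `sites-available/domainname` might look like this (a sketch mirroring the placeholders above, not a tested config):

```nginx
server {
	listen 80;
	listen [::]:80;

	server_name HOSTNAME.xyz www.HOSTNAME.xyz;

	root /var/www/domainname;
	index index.html;
}
```

The mail site is the same block with `server_name mail.HOSTNAME.xyz www.mail.HOSTNAME.xyz;` and `root /var/www/mail;`.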
## RSYNC command
`rsync -uvrP --delete-after LOCAL root@HOSTNAME.xyz:/var/www/name/`
## CERTBOT
Run `certbot --nginx` and follow the hints on the screen.
It guides you through the procedure in quite some detail.
Make sure that in the end you select the HTTP-to-HTTPS redirect option.
## MAIL
Use `emailwiz` from `lukesmith.xyz/emailwiz.sh`, run it, and replace guest.guest with your domain name.
Copy the output to the TXT records on epik.com under `mail._domainkey.HOSTNAME.xyz`.
Add the wanted user using `useradd -G mail -m username`; to set a password use `passwd username`.
To allow email to pass, you need to set the firewall correctly.
Besides the ports listed below, port 25 can sometimes be problematic.
Make sure to use `ufw` to open these ports, and also use your VPS interface to open them if necessary.
| Server | Protocol | Port | Handshake | Role |
| :--- | :--- | :--- | :--- | :--- |
| mail.HOSTNAME.xyz | SMTP | 587 | STARTTLS | Outgoing |
| mail.HOSTNAME.xyz | IMAP | 993 | TLS/SSL | Incoming |
Also set the MX records at your DNS service provider and let them point to `mail.HOSTNAME.xyz`.
# Possible Hiccups on the way
- If you had that domain already set up on a server with a different IP address, you have to clean out your local `.ssh/known_hosts` before you can connect using `ssh`.
- Make sure that the config files for nginx include `listen 80; listen [::]:80;`, otherwise the certbot install will fail.

102
docs/anki_sync_server.md Normal file
View File

@ -0,0 +1,102 @@
# Anki Sync Server
With the new versions of Anki, `anki` now provides an integrated sync-server feature, allowing for up-to-date scheduler versions as long as anki on the server is also updated regularly.
Other implementations such as [Anki Sync Server](https://github.com/dsnopek/anki-sync-server) might be less resource intensive but need to be updated separately to allow for newer scheduler versions.
This requires quite a bit of memory, but a lot of it is shared. If you run anything else using Python (very likely), running this sync server in addition should only require an additional 100-200M.
## Installation
Install anki: `paru -S anki`
We're assuming here that you are running the latest Anki on your server, however you manage to do that (some distros are quite conservative with their anki versions). On Arch, I currently maintain the `anki` and `anki-qt5` packages in the AUR so they should be up-to-date.
## Reverse Proxy using nginx
Anki creates a sync server locally on 0.0.0.0:8080. We want to put this behind a reverse proxy for convenience.
Create a new `server{}` section in your nginx setup. Recommended is a new file in `/etc/nginx/sites-available/anki_sync_server`
```nginx
server {
server_name anki.<yourdomain.tld>;
listen 80;
client_max_body_size 500M;
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_pass http://0.0.0.0:8080;
proxy_buffering off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
```
Some of these settings are a bit overkill for anki, but are good defaults for modern web applications behind a reverse proxy.
Link to enabled sites:
```
ln -s /etc/nginx/sites-available/anki_sync_server /etc/nginx/sites-enabled/
```
Check whether the syntax is good via `nginx -t` and if so, restart nginx `systemctl restart nginx`.
This is still unencrypted. Using certbot you can now deploy certificates
```sh
certbot --nginx -d anki.<yourdomain.tld>
```
If everything went well, you should be able to verify the changes in `/etc/nginx/sites-available/anki_sync_server`.
## Create a user and service
Personally, I see this sync data as a kind of database and would like to store it in `/var/lib` because of this.
For security we should start anki as a separate user with write permissions confined to `/var/lib/anki`.
Create a user:
```sh
useradd -b /var/lib/ -s /usr/bin/nologin anki
mkdir /var/lib/anki
chown -R anki:anki /var/lib/anki
```
Using systemd, create a service file: `/etc/systemd/system/anki_sync_server.service`:
```ini
[Unit]
Description=Personal Anki Sync Server
After=network.target
[Service]
ExecStart=anki --syncserver
Restart=always
User=anki
Group=anki
Environment=SYNC_BASE="/var/lib/anki"
Environment=MAX_SYNC_PAYLOAD_MEGS=500
Environment=SYNC_USER1=<name1>:<password1>
Environment=SYNC_USER2=<name2>:<password2>
[Install]
WantedBy=multi-user.target
```
You can create additional users using the `SYNC_USER<i>` environment variables. Note that this stores the passwords in plain text on the machine, which is less than optimal.
TODO: can we somehow store these env vars securely?
You should now be able to start your sync server via `systemctl start anki_sync_server.service`.
If everything looks good in the journal, you can `systemctl enable anki_sync_server`.
## Connecting from your Client
### Desktop
1. Go to: `Tools -> Preferences -> Syncing`
2. Logout
3. Set "Self-hosted sync server" to `https://anki.<yourdomain.tld>`
4. Restart Anki
5. Click on `Sync` and log in using the `<name1>` and `<password1>` which you set in the service file.
### AnkiDroid
1. Go to: `Settings -> Advanced -> Custom sync server`
2. Set the sync url to: `https://anki.<yourdomain.tld>`
3. Set the media sync url to `https://anki.<yourdomain.tld>/msync`
4. Click on the sync icon in the main top-bar. Login using your `<name1>` and `<password1>` you set in the service file.
## More info
See https://docs.ankiweb.net/sync-server.html

3
docs/beancount.md Normal file
View File

@ -0,0 +1,3 @@
# BEANCOUNT
TBD

174
docs/calDAV.md Normal file
View File

@ -0,0 +1,174 @@
# CalDAV Server with Calcurse
### Goal
- Set up your own CalDAV server, which allows you to sync [calcurse](https://www.calcurse.org/) with your other devices.
If you want to run Nextcloud anyway, you can also use its CalDAV server.
This is a more lightweight solution, which does not require a full PHP environment.
### Software used
- A current Debian install is assumed, using nginx as its server. Tested on Debian 11.
- [Baikal](https://sabre.io/baikal/)
- Other more lightweight setups are possible, see [Radicale](https://radicale.org/v3.html) or [carldav](https://github.com/ksokol/carldav). These did not work with calcurse directly. Planned for the future, as they do not require a PHP environment.
- [Davx^5 Android](https://www.davx5.com/)
### Install
1. Make sure all the dependencies are installed
```sh
sudo apt-get install nginx php-fpm php-sqlite3 composer php-xml php-curl -y
```
2. Go to your sources directory. Here it is assumed to be `/opt/src/` and install Baikal. Default port is 9999, so adjust it to your wishes. Assumed to be 9999 throughout this write-up.
```sh
cd /opt/src
git clone https://github.com/sabre-io/baikal
cd baikal
composer install
```
3. Make the baikal directory writable by the webserver process. This is strictly necessary for `Specific` and `config`.
```sh
chown -R www-data:www-data Specific config
```
I ran into an issue that may have been solved by owning the whole baikal directory. So in case you find yourself with an error related to write-permission denials, run
```sh
sudo chown -R www-data:www-data .
```
### Server Config
1. Create the corresponding nginx config for the page.
```sh
cd /etc/nginx/sites-available
touch baikal.site
```
2. Copy the following config. Adjust the `root /opt/src/baikal/html` path for your install and make sure the correct PHP version is used (see `php --version`).
```nginx
server {
listen 9999 default_server;
root /opt/src/baikal/html;
dav_methods PUT DELETE MKCOL COPY MOVE;
index index.php index.html index.htm index.nginx-debian.html;
server_name _;
rewrite ^/.well-known/caldav /dav.php redirect;
rewrite ^/.well-known/carddav /dav.php redirect;
charset utf-8;
location ~ /(\.ht|Core|Specific|config) {
deny all;
return 404;
}
location ~ ^(.+.php)(.*)$ {
try_files $fastcgi_script_name =404;
include /etc/nginx/fastcgi_params;
fastcgi_split_path_info ^(.+.php)(.*)$;
fastcgi_pass unix:/run/php/php7.4-fpm.sock; #Adjust here for your version
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
location ~ /.ht {
deny all;
}
}
```
3. Link the available site to the enabled ones
```sh
ln -s /etc/nginx/sites-available/baikal.site /etc/nginx/sites-enabled/
```
4. Restart nginx after testing the config files
```sh
nginx -t
systemctl restart nginx
```
5. Check if baikal is running on `<hostname/ip>:9999`.
### Baikal Config
1. Follow the setup guide, setting the time-zone, and enable the `basic` authentication type. If wanted, it is possible to send invite emails for upcoming events to its participants. If you are interested in this, check the web, as I did not go down that path.
2. Continue and select the SQLite database and continue. If you have specific reasons to use MySQL, you can do this with
```sh
mysql -u root -p
```
and then create a new baikal data-base.
```sql
CREATE DATABASE baikal;
CREATE USER 'baikal'@'localhost' IDENTIFIED BY '<YOUR BEST PASSWORD123>';
GRANT ALL PRIVILEGES ON baikal.* TO 'baikal'@'localhost';
FLUSH PRIVILEGES;
```
Add your selection of host, name and username to the page and continue. We assume a SQLite database.
3. We now log in to baikal using the admin user. Now we can create users. We create a `testuser` under the mail address `test@testing.ts`. Now we can adjust the default calendar or add more calendars if we like. We can also enable or disable todo or note syncing.
### Calcurse Config
1. Make sure `calcurse-caldav` is available as a command.
2. Copy the config and adjust
```ini
[General]
### Adjust here when you also want to sync todo's and notes! (cal, todo, note)
SyncFilter = cal
DryRun = No
Verbose = Yes
AuthMethod = basic
Hostname = IPADDRESS:9999
#Path = /dav.php/calendars/<username>/<calender-name>
Path = /dav.php/calendars/test/default
InsecureSSL = No
# I run this on a local server, which does not have https enabled.
# If you enable https on the baikal page, which is highly recommended when running it open to the web, change this to Yes
HTTPS = No
[Auth]
#Username = <username>
Username = test
#Either use plaintext password (not recommended...) or add your password to your CLI password manager (pass) under baikal/username
#Password = testpassword1234
PasswordCommand = pass baikal/username
```
3. Save and run `calcurse-caldav --init=two-way`. Other initialisation options exist and are explained [here](https://www.calcurse.org/files/calcurse-caldav.html). This does the initial sync between your baikal instance and calcurse.
4. For future syncs, either
- set up a post-save and/or start hook running `calcurse-caldav`, or
- just run `calcurse-caldav` every time you'd like to have things synced.
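A post-save hook automates the first option. Hooks live in calcurse's data directory; the path below assumes an XDG setup (older installs use `~/.calcurse/hooks/` instead):

```sh
# Hooks directory inside calcurse's data directory (XDG layout assumed)
mkdir -p ~/.local/share/calcurse/hooks
cat > ~/.local/share/calcurse/hooks/post-save <<'EOF'
#!/bin/sh
# Fire-and-forget sync after every save
calcurse-caldav >/dev/null 2>&1 &
EOF
chmod +x ~/.local/share/calcurse/hooks/post-save
```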
### Android
Some calendars have built-in CalDAV support. For those, follow their procedure.
If not, we can use Davx^5. Get it from F-Droid and drop in your URL, username and password. Set up a sync period and select the calendar in your calendar app.
In theory it is also possible to sync your address book.
### Future:
- Use some other caldav server, which might be more light weight.
- Test the note and todo sync
- Test the address-book sync, maybe with [abook](https://abook.sourceforge.io/)

80
docs/dnsmasq.md Normal file
View File

@ -0,0 +1,80 @@
# DNSMasq
A simple and lightweight DNS and DHCP server for local development.
Personally I have so far only used this to circumvent NAT loopback issues with my router, but it can be used for much more.
## Installation
It's a simple
```sh
pacman -S dnsmasq
```
### Configuration
We need to disable the systemd-resolved service, as it will conflict with DNSMasq.
Afterwards we can start the DNSMasq service.
```sh
systemctl disable systemd-resolved.service
systemctl stop systemd-resolved.service
systemctl enable --now dnsmasq.service
```
We can now look into the configuration file at `/etc/dnsmasq.conf` and make changes to our liking.
```conf
listen-address=::1,127.0.0.1,192.168.1.1
```
More cached DNS queries:
```conf
cache-size=1000
```
(max 10000)
DNSSec validation:
```conf
conf-file=/usr/share/dnsmasq/trust-anchors.conf
dnssec
```
## DNS Forwarding
We will most likely not have all wanted DNS entries ourselves and should look these up on a different server.
We can do this by changing `/etc/resolv.conf` to the following:
```conf
nameserver ::1
nameserver 127.0.0.1
options trust-ad
```
If we want NetworkManager to not overwrite this file, we can set it to immutable:
```sh
chattr +i /etc/resolv.conf
```
then restart NetworkManager:
```sh
systemctl restart NetworkManager.service
```
Now add your upstream DNS servers to `/etc/dnsmasq.conf`:
```conf
no-resolv
# Google's nameservers, for example
server=8.8.8.8
server=8.8.4.4
```
## Address Overrides
For NAT Loopback we need to override the DNS entries for our local network.
For example if we want to direct `cloud.example.com` to our server directly, we can add the following to `/etc/dnsmasq.conf`:
```conf
address=/cloud.example.com/192.168.1.2
```
Adjust the IP address to your setup.
After restarting the dnsmasq service, we can check whether the DNS entry is correct:
```sh
drill cloud.example.com
```
You can now set this DNS server as your primary DNS server in your router or on your local machine.

34
docs/docker.md Normal file
View File

@ -0,0 +1,34 @@
# Docker
General tips and tricks around Docker, as its usage has become unavoidable.
## Docker compose as systemd services
You will be able to start any docker compose program via `systemctl start docker-compose@<program>`.
Create the file `/etc/systemd/system/docker-compose@.service` with the following content:
```ini
[Unit]
Description=%i service with docker compose
PartOf=docker.service
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/etc/docker/compose/%i
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose stop
[Install]
WantedBy=multi-user.target
```
Create directories as necessary and place your `docker-compose.yml` in an appropriately named folder (as an example: "myprogram") in `/etc/docker/compose`.
Ergo: Your docker-compose.yml should be in `/etc/docker/compose/myprogram/docker-compose.yml`.
Reload the daemon and start your service:
```sh
systemctl daemon-reload
systemctl start docker-compose@myprogram
```
More ideas:
https://gist.github.com/mosquito/b23e1c1e5723a7fd9e6568e5cf91180f

41
docs/johntheripper.md Normal file
View File

@ -0,0 +1,41 @@
# JohnTheRipper
John the Ripper is a bunch of scripts to crack many different kinds of passwords offline.
## Installation
```sh
git clone https://github.com/magnumripper/JohnTheRipper.git
cd JohnTheRipper/src
./configure && make
```
## Usage
### For pdfs
1. Create a hash of the pdf you want to open
```sh
cd JohnTheRipper/run
./pdf2john.pl <pdf file> > <output file>
```
The output file will contain a hash of the PDF's meta info.
It will be referred to as the hash file from now on.
2. Crack the hash
```sh
cd JohnTheRipper/run
./john <hash file>
```
3. Retrieve the password
```sh
cd JohnTheRipper/run
./john --show <hash file>
```
The password will be displayed in the format `<path-to-pdf>:<password>`:
```sh
/root/user/secred.pdf:54321
```

docs/luks2.md
# LUKS2 fully encrypted Arch-Linux
As the key-derivation functions of LUKS1 are lacking, but GRUB normally only supports LUKS1, additional steps are required to get a working, fully encrypted LUKS2 hard drive.
The basic process is similar to a LUKS1-encrypted hard drive, but additional measures need to be taken afterwards, before the reboot into your installed OS.
This only works on UEFI systems.
In this tutorial we're assuming you want to install everything to /dev/sda with an ext4 filesystem.
BTRFS requires additional steps to my knowledge.
# Boot into ISO, create LVM and mount
We want two partitions: `sda1` (500M, the EFI system partition) and `sda2` (an LVM container for the rest of your encrypted hard drive).
Create partition table via `cfdisk` or similar tools.
Note: for BIOS systems a dummy 1M partition would also be required. For UEFI this is not needed.
## Create LVM
```sh
cryptsetup luksFormat /dev/sda2
cryptsetup open /dev/sda2 cryptlvm
pvcreate /dev/mapper/cryptlvm
vgcreate vg /dev/mapper/cryptlvm
```
Create your desired logical volumes, for example:
```sh
lvcreate -L 8G vg -n swap
lvcreate -L 32G vg -n root
lvcreate -l 100%FREE vg -n home
```
and create filesystems on them:
```sh
mkfs.ext4 /dev/vg/root
mkfs.ext4 /dev/vg/home
mkswap /dev/vg/swap
```
and finally mount them. The EFI partition should be mounted to `/mnt/efi`.
If you have not yet created a filesystem on your EFI partition, do so now:
```sh
mkfs.fat -F32 /dev/sda1
```
```sh
mount /dev/vg/root /mnt
mount --mkdir /dev/vg/home /mnt/home
swapon /dev/vg/swap
mount --mkdir /dev/sda1 /mnt/efi
```
## Continue with your normal Arch install
Note the lack of grub in the pacstrap; we will build a patched version later.
```sh
pacstrap -K /mnt base base-devel git linux linux-firmware lvm2 efibootmgr networkmanager neovim ...
genfstab -U /mnt >> /mnt/etc/fstab
arch-chroot /mnt
echo YourHostName > /etc/hostname
nvim /etc/locale.gen
locale-gen
ln -sf /usr/share/zoneinfo/Europe/Zurich /etc/localtime
hwclock --systohc
passwd
```
## Edit /etc/mkinitcpio.conf to support encryption
In `/etc/mkinitcpio.conf`, edit the HOOKS line to additionally include `encrypt` and `lvm2` (order matters):
```/etc/mkinitcpio.conf
HOOKS=(base udev autodetect modconf kms keyboard keymap consolefont block encrypt lvm2 filesystems fsck)
```
```
and rebuild initramfs:
```sh
mkinitcpio -P
```
## Create new user, download AUR helper, and install grub-improved-luks2-git
```sh
useradd -m -G wheel alex
passwd alex
```
Give it sudo permissions by adding in `/etc/sudoers`:
```/etc/sudoers
%wheel ALL=(ALL) ALL
```
Now install paru or equivalent AUR helper:
```sh
su - alex
git clone https://aur.archlinux.org/paru
cd paru
makepkg -si
paru -S grub-improved-luks2-git
```
We now have a patched GRUB installed and can continue, for now, as if we were encrypting with LUKS1:
## Edit /etc/default/grub and grub-install
Append the encrypted partition's UUID to `/etc/default/grub` for reference via
```sh
ls -l /dev/disk/by-uuid >> /etc/default/grub
```
and adjust two things in the file:
```/etc/default/grub
GRUB_ENABLE_CRYPTODISK=y
```
and add to `GRUB_CMDLINE_LINUX` (it can have multiple, space-separated arguments, so don't delete anything that's already there, just add):
```/etc/default/grub
GRUB_CMDLINE_LINUX="cryptdevice=UUID=device-UUID:cryptlvm"
```
and replace "device-UUID" with the UUID we got for `/dev/sda2` from the previous `ls` command. Of course, remove all the trailing `ls` output afterwards.
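For illustration, with a made-up UUID for `/dev/sda2` (yours will differ), the finished line would read:

```/etc/default/grub
GRUB_CMDLINE_LINUX="cryptdevice=UUID=1234abcd-12ab-34cd-56ef-123456789abc:cryptlvm"
```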
```sh
grub-install --target=x86_64-efi --efi-directory=/efi --bootloader-id=GRUB --recheck
grub-mkconfig -o /boot/grub/grub.cfg
```
## LUKS2 support
Now create an additional file `/boot/grub/grub-pre.cfg` with the following content:
```/boot/grub/grub-pre.cfg
set crypto_uuid=device-UUID
cryptomount -u $crypto_uuid
set root=lvm/vg-root
set prefix=($root)/boot/grub
insmod normal
normal
```
and replace device-UUID with the same device-UUID as before (again, `ls -l /dev/disk/by-uuid >> /boot/grub/grub-pre.cfg` can help here to get the UUID for `/dev/sda2`).
Now we can overwrite the previously generated grubx64.efi with a LUKS2-compatible one:
```sh
grub-mkimage -p /boot/grub -O x86_64-efi -c /boot/grub/grub-pre.cfg -o /tmp/grubx64.efi lvm luks2 part_gpt cryptodisk gcry_rijndael argon2 gcry_sha256 ext2
install -v /tmp/grubx64.efi /efi/EFI/GRUB/grubx64.efi
```
We should now be done. `exit`, `umount -R /mnt`, and `reboot` into GRUB to see whether everything worked.
This still requires you to enter your passphrase twice but can be alleviated just as with the LUKS1 case:
## Only enter the password once
Create a keyfile:
```sh
dd bs=512 count=4 if=/dev/random of=/crypto_keyfile.bin iflag=fullblock
chmod 600 /crypto_keyfile.bin
cryptsetup luksAddKey /dev/sda2 /crypto_keyfile.bin
```
Add this to the initramfs:
```/etc/mkinitcpio.conf
FILES=("/crypto_keyfile.bin")
```
And rebuild via
```sh
mkinitcpio -P
```
And add this file to the `GRUB_CMDLINE_LINUX` in `/etc/default/grub`:
```/etc/default/grub
GRUB_CMDLINE_LINUX="... cryptkey=rootfs:/crypto_keyfile.bin"
```
And again rebuild GRUB
```sh
grub-install --target=x86_64-efi --efi-directory=/efi --bootloader-id=GRUB --recheck
grub-mkconfig -o /boot/grub/grub.cfg
grub-mkimage -p /boot/grub -O x86_64-efi -c /boot/grub/grub-pre.cfg -o /tmp/grubx64.efi lvm luks2 part_gpt cryptodisk gcry_rijndael argon2 gcry_sha256 ext2
install -v /tmp/grubx64.efi /efi/EFI/GRUB/grubx64.efi
```
# Auto-decrypt additional encrypted hard-drives on bootup
You can decrypt additional hard-drives automatically. For this we will use `/etc/crypttab` as well as `/etc/fstab`. This requires systemd to work.
Create your additional encrypted hard drives if they do not already exist:
```sh
cryptsetup luksFormat /dev/sdX
cryptsetup open /dev/sdX YourDiskNameHere
mkfs.ext4 /dev/mapper/YourDiskNameHere
```
If you do not wish to enter the additional password on boot-up, you will have to create a keyfile like we did for `/dev/sda2` above.
Of course, this lessens security, as any additional hard drives can also be decrypted once `/dev/sda2` has been decrypted or cracked.
Systemd can autodetect keys in `/etc/cryptsetup-keys.d` if they follow the naming pattern `YourDiskNameHere.key`. Create this directory if not already present:
```sh
mkdir /etc/cryptsetup-keys.d
```
Add an additional keyfile to your newly created encrypted hard-drive:
```sh
dd bs=512 count=4 if=/dev/random of=/etc/cryptsetup-keys.d/YourDiskNameHere.key iflag=fullblock
chmod 600 /etc/cryptsetup-keys.d/YourDiskNameHere.key
cryptsetup luksAddKey /dev/sdX /etc/cryptsetup-keys.d/YourDiskNameHere.key
```
Get the UUID of your new hard drive via `ls -l /dev/disk/by-uuid` and edit `/etc/crypttab`:
```/etc/crypttab
YourDiskNameHere UUID=TheUUIDYouJustGot /etc/cryptsetup-keys.d/YourDiskNameHere.key
```
```
If you use `/etc/cryptsetup-keys.d` and name your keys `YourDiskNameHere.key`, you can leave out the third column, as this location is tried automatically.
After a `systemctl daemon-reload` you should now be able to start a service called `systemd-cryptsetup@YourDiskNameHere`.
You can verify this via `systemctl start systemd-cryptsetup@YourDiskNameHere`.
You should not be asked for a password now.
If everything works we can now modify the `/etc/fstab` for the automatic mounting. This is done like any unencrypted hard-drive by appending:
```/etc/fstab
/dev/mapper/YourDiskNameHere /YourMountPoint ext4 defaults 0 2
```
Your encrypted drive should now automount on boot-up without an additional password-prompt.
# NOT TESTED, assumed to be the same as the LUKS1 case
## Use swap for hibernations
Add the `resume` hook in `/etc/mkinitcpio.conf`:
```/etc/mkinitcpio.conf
HOOKS=(base udev autodetect modconf kms keyboard keymap consolefont block encrypt lvm2 resume filesystems fsck)
```
```
and rebuild via `mkinitcpio -P`.
Then: add to the `GRUB_CMDLINE_LINUX` in `/etc/default/grub`:
```/etc/default/grub
GRUB_CMDLINE_LINUX="... resume=/dev/vg/swap"
```
and rebuild GRUB.
```sh
grub-install --target=x86_64-efi --efi-directory=/efi --bootloader-id=GRUB --recheck
grub-mkconfig -o /boot/grub/grub.cfg
grub-mkimage -p /boot/grub -O x86_64-efi -c /boot/grub/grub-pre.cfg -o /tmp/grubx64.efi lvm luks2 part_gpt cryptodisk gcry_rijndael argon2 gcry_sha256 ext2
install -v /tmp/grubx64.efi /efi/EFI/GRUB/grubx64.efi
```

docs/matlab.md
# matlab
## Installation via AUR
### PKGBUILD
Download PKGBUILD: `paru -G matlab`
### Licenses
- Go to the [License center](https://www.mathworks.com/licensecenter) on mathworks
- On the "Install and Activate" tab, select (or create) an appropriate license
- Navigate to the download section for the license file and the file installation key
- Download the **license file** and put it in the repository
- Copy and paste the **file installation key** into a plain text file
## Create tarball
Check that `libselinux` and `libxcrypt-compat` are installed. Otherwise the installer will exit with error code 42 and no further instructions.
```sh
paru -S --asdeps libselinux libxcrypt-compat
```
Then:
- [Download the matlab installer](https://www.mathworks.com/downloads)
- Unpack and launch the installer
- After logging in and accepting the license, select `Advanced Options > I want to download without installing` from the top dropdown menu.
- Set the download location to an empty directory called `matlab`
- Select the toolboxes you want.
After downloading, run from the parent directory
```sh
tar cf matlab.tar matlab
```
to create the tarball. The folder (here called `matlab`) is usually named after the download time; rename it to `matlab` before creating the tarball.
Move the matlab.tar to the repository.
Adjust the `pkgver` and `release` variables in the `PKGBUILD` to reflect the current release.
Run `makepkg -si` to install.
### mv cannot stat error
In the case of an error in the form of:
`mv: cannot stat 'dependency_links.txt'$'\n''PKG-INFO'$'\n''SOURCES.txt'$'\n''top_level.txt': No such file or directory`
Edit line 207 of the `PKGBUILD` to include `ls -d` instead of just `ls`.
## Configuration
### fix graphics driver with intel
In the case of `libGL error: failed to open iris:`:
Add to the `matlab` script (`sudo nvim $(which matlab)`) at the top:
```sh
export MESA_LOADER_DRIVER_OVERRIDE=i965
```
### HiDPI Fix
In Matlab:
```m
s = settings;s.matlab.desktop.DisplayScaleFactor
s.matlab.desktop.DisplayScaleFactor.PersonalValue = 2
```
This value can be a float.
### Fonts malformed
Set anti-aliasing to true under `Preferences->MATLAB->Fonts` and restart.

docs/neomutt.md
# Neomutt
## Markdown to HTML rendering
To write more normie-friendly emails, non-plain-text emails are probably better.
For this, a conversion from Markdown to HTML with Mathjax support seems best.
It supports all the bells and whistles of markdown (images, links, code, italics, bold) as well as mathematical formulas in LaTeX notation using MathJax.
### Configuration
The conversion is done via pandoc using templates.
Ensure `pandoc` is installed. (`which pandoc || sudo pacman -S pandoc`)
Add to your muttrc (either in `~/.mutt/muttrc` or `~/.config/mutt/muttrc`. From now on assuming `~/.config/mutt` as config folder)
```muttrc
macro compose m \
"<enter-command>set pipe_decode<enter>\
<pipe-message>pandoc -f gfm -t plain -o /tmp/msg.txt<enter>\
<pipe-message>pandoc -s --self-contained -o /tmp/msg.html --resource-path ~/.config/mutt/templates/ --template email<enter>\
<enter-command>unset pipe_decode<enter>\
<attach-file>/tmp/msg.txt<enter>\
<attach-file>/tmp/msg.html<enter>\
<tag-entry><previous-entry><tag-entry><group-alternatives>" \
"Convert markdown to HTML5 and plaintext alternative content types"
```
Create a folder called `templates`: `mkdir -p ~/.config/mutt/templates`
and create a file called `email.html` in this folder with the following content:
```html
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" lang="$lang$" xml:lang="$lang$"$if(dir)$ dir="$dir$"$endif$>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
<script type="text/javascript" id="MathJax-script" async
src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js">
</script>
<style>
$styles.html()$
</style>
$for(css)$
<link rel="stylesheet" href="$css$" />
$endfor$
<!--[if lt IE 9]>
<script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
<![endif]-->
$for(header-includes)$
$header-includes$
$endfor$
</head>
<body>
$body$
$for(include-after)$
$include-after$
$endfor$
</body>
</html>
```
### Usage
To use this, write your email as usual and afterwards, press `m` on the created file in neomutt.
This will generate a combined file for plaintext fallback in case of unsupported HTML rendering.
For now, also delete the still present plaintext file with `D`.
Your email should now be ready to be sent.
For writing formulas, just use LaTeX syntax inside the normal `$` delimiters.
Be careful with inline formulas: a whitespace between the leading `$` and the formula breaks the rendering!
## File Size
Since the MathJax script used for rendering the math syntax is embedded in the HTML, the file sizes are usually around 1 MB.
This is not necessary when no LaTeX syntax is used.
Create a second macro which uses a different template that excludes the MathJax script.
This way you can create smaller emails with pure markdown syntax, and when necessary send mathematical formulas in larger mails.
For this add the following to the muttrc:
```muttrc
macro compose l \
"<enter-command>set pipe_decode<enter>\
<pipe-message>pandoc -f gfm -t plain -o /tmp/msg.txt<enter>\
<pipe-message>pandoc -s --self-contained -o /tmp/msg.html --resource-path ~/.config/mutt/templates/ --template email_pure<enter>\
<enter-command>unset pipe_decode<enter>\
<attach-file>/tmp/msg.txt<enter>\
<attach-file>/tmp/msg.html<enter>\
<tag-entry><previous-entry><tag-entry><group-alternatives>" \
"Convert markdown to HTML5 and plaintext alternative content types"
```
Further create a new file called `email_pure.html` in `mutt/templates` with the following content:
```html
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" lang="$lang$" xml:lang="$lang$"$if(dir)$ dir="$dir$"$endif$>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
<style>
$styles.html()$
</style>
$for(css)$
<link rel="stylesheet" href="$css$" />
$endfor$
<!--[if lt IE 9]>
<script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
<![endif]-->
$for(header-includes)$
$header-includes$
$endfor$
</head>
<body>
$body$
$for(include-after)$
$include-after$
$endfor$
</body>
</html>
```
## Khard Address Book integration
Sadly, khard does not have as great a TUI as abook, but it benefits from being able to sync with CardDAV servers like Nextcloud.
For seamless integration, such as adding emails and autocompleting from the address book, add the following to your muttrc (either in `~/.mutt/muttrc` or `~/.config/mutt/muttrc`; from now on assuming `~/.config/mutt` as the config folder):
```muttrc
set query_command = "echo %s | xargs khard email --parsable --"
macro index,pager a \
"<pipe-message>khard add-email<return>" \
"add the sender email address to khard"
```
For syncing with CardDAV servers like Nextcloud, look into [Nextcloud](./nextcloud.md).
## abook Address Book integration
Add the following to the muttrc. The first line sets the default query to use abook, while the second allows us to quickly add the sender of a mail we are currently reading to the address book using `A`.
```muttrc
set query_command= "abook --mutt-query '%s'"
macro index,pager A "<pipe-message>abook --add-email-quiet<return>" "Add this sender to Abook"
bind editor <Tab> complete-query
```
To use abook for composing messages, start a new mail using `m`.
Now press `Ctrl+t`. This pulls up your abook contacts, which you can navigate using the arrow keys.
Once you have found the recipient of your choice, press Enter.
To send a mail to multiple recipients, tag them using `t` in that list.
Having selected all of them, press `;m` to save them and press Enter.
You can also search the list: after pressing `Ctrl+t`, press `/` to search.
## Signature and GPG
To sign and/or encrypt your mails via GPG, set the following in the muttrc:
```muttrc
set crypt_use_gpgme=yes
set postpone_encrypt = yes
set pgp_self_encrypt = yes
set crypt_use_pka = no
set crypt_autosign = no
set crypt_autoencrypt = no
set crypt_autopgp = yes
set pgp_sign_as=0x12345678
```
The last line is the key id of the key you want to use for signing - which can be extracted from `gpg --keyid-format 0xlong -K --fingerprint`.
To send an encrypted message, import the public key of the recipient using `gpg --import <keyfile>` or `gpg --auto-key-locate keyserver --locate-keys user@example.net`.
To bring up the `pgp` menu in mutt, press `p` before sending the mail.
Then select encryption, and select the recipient from the list.
TODO: delete plaintext attachment after HTML creation
TODO: remove `tmp` files after sending

docs/nextcloud.md
# Nextcloud
## Installation
We're assuming an Arch Linux installation, but the steps should be similar for other distributions.
There are two possible ways to serve Nextcloud's PHP code: uWSGI and PHP-FPM.
We'll be using PHP-FPM, as this is the recommended way and nginx is easier to set up with it, especially if you wish to enable additional plugins such as LDAP.
Be prepared for quite a bit of work, with too many files which look identical, but it's worth it.
This install guide is based on the [Arch Wiki](https://wiki.archlinux.org/index.php/Nextcloud) and the [Nextcloud documentation](https://docs.nextcloud.com/server/20/admin_manual/installation/source_installation.html). It mainly emphasizes some points which get lost in the Arch Wiki article.
We assume postgresql as the database backend, but you can also use mysql/mariadb (which is also the recommended way by Nextcloud). I do this because I run a lot of other stuff on postgresql already and like it :).
PostgreSQL is said to deliver better performance and overall has fewer quirks compared to MariaDB/MySQL but expect less support from Nextcloud devs and community.
Nginx is assumed to already be set up, with a certbot certificate for your domain.
In these instructions we will use `cloud.example.com` as the domain name, but you should of course replace it with your own.
First, install the required packages:
```sh
pacman -S nextcloud
```
When asked, choose `php-legacy` as your PHP version.
```sh
pacman -S php-legacy-imagick librsvg --asdeps
```
### Configuration
#### PHP
```sh
cp /etc/php-legacy/php.ini /etc/webapps/nextcloud
chown nextcloud:nextcloud /etc/webapps/nextcloud/php.ini
```
enable the following extensions in `/etc/webapps/nextcloud/php.ini`:
```ini
extension=bcmath
extension=bz2
extension=exif
extension=gd
extension=iconv
extension=intl
extension=sysvsem
; in case you installed php-legacy-imagick (as recommended)
extension=imagick
```
Set date.timezone. For example:
```ini
date.timezone = Europe/Zurich
```
Raise PHP memory limit to at least 512MB:
```ini
memory_limit = 512M
```
Limit Nextcloud's access to the filesystem:
```ini
open_basedir=/var/lib/nextcloud:/tmp:/usr/share/webapps/nextcloud:/etc/webapps/nextcloud:/dev/urandom:/usr/lib/php-legacy/modules:/var/log/nextcloud:/proc/meminfo:/proc/cpuinfo
```
#### Nextcloud
In `/etc/webapps/nextcloud/config/config.php` add:
```php
'trusted_domains' =>
array (
0 => 'localhost',
1 => 'cloud.example.com',
),
'overwrite.cli.url' => 'https://cloud.example.com/',
'htaccess.RewriteBase' => '/',
```
#### System and environment
To make sure the Nextcloud specific `php.ini` is used by the `occ` tool set the environment variable `NEXTCLOUD_PHP_CONFIG`:
```sh
export NEXTCLOUD_PHP_CONFIG=/etc/webapps/nextcloud/php.ini
```
And also add this to your `.bashrc` or `.zshrc` (whichever is your shell) to make it permanent.
As a privacy and security precaution create the dedicated directory for session data:
```sh
install --owner=nextcloud --group=nextcloud --mode=700 -d /var/lib/nextcloud/sessions
```
#### PostgreSQL
I'm assuming you already have postgres installed and running. (Till feel free to improve this section)
For additional security in this scenario it is recommended to configure PostgreSQL to only listen on a local UNIX socket:
In `/var/lib/postgres/data/postgresql.conf`:
```
listen_addresses = ''
```
In particular, do not forget to initialize your database with `initdb` if you have not set up postgresql yet.
Now create a database and user for Nextcloud:
```sh
su - postgres
psql
CREATE USER nextcloud WITH PASSWORD 'db-password';
CREATE DATABASE nextcloud TEMPLATE template0 ENCODING 'UNICODE';
ALTER DATABASE nextcloud OWNER TO nextcloud;
GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;
\q
```
and of course replace `db-password` with a strong password of your choice.
Additionally install `php-legacy-pgsql`:
```sh
pacman -S php-legacy-pgsql --asdeps
```
and enable this in /etc/webapps/nextcloud/php.ini:
```ini
extension=pdo_pgsql
```
Now setup Nextcloud's database schema with:
```sh
occ maintenance:install \
--database=pgsql \
--database-name=nextcloud \
--database-host=/run/postgresql \
--database-user=nextcloud \
--database-pass=<db-password> \
--admin-pass=<admin-password> \
--admin-email=<admin-email> \
--data-dir=/var/lib/nextcloud/data
```
and adjust the appropriate values in `<>` to your specific setup.
Congrats, you now have Nextcloud set up. It is not yet being served, though; for that we need to continue with our FPM and nginx setup.
#### FPM
Install `php-legacy-fpm`:
```sh
pacman -S php-legacy-fpm --asdeps
```
##### php-fpm.ini
We don't want to use the default php.ini for php-fpm, but a dedicated one. Hence we first copy the default php.ini to a dedicated one:
```sh
cp /etc/php-legacy/php.ini /etc/php-legacy/php-fpm.ini
```
Enable opcache in `/etc/php-legacy/php-fpm.ini`:
```ini
zend_extension=opcache
```
And set the following parameters under `[opcache]` in `/etc/php-legacy/php-fpm.ini`:
```ini
[opcache]
opcache.enable = 1
opcache.interned_strings_buffer = 8
opcache.max_accelerated_files = 10000
opcache.memory_consumption = 128
opcache.save_comments = 1
opcache.revalidate_freq = 1
```
This should differ from the default only in `opcache.revalidate_freq` but be sure to uncomment all of them anyways.
#### nextcloud.conf
Next you have to create a so called pool file for FPM. It is responsible for spawning dedicated FPM processes for the Nextcloud application. Create a file `/etc/php-legacy/php-fpm.d/nextcloud.conf`.
You can use the file in this repository as a template ([link](../static/nextcloud/nextcloud.conf)). It should work out of the box without any modifications.
Create the access log directory:
```sh
mkdir -p /var/log/php-fpm-legacy/access
```
#### Systemd service
To overwrite the default php-fpm-legacy service create a file in `/etc/systemd/system/php-fpm-legacy.service.d/override.conf` with the following content:
```ini
[Service]
ExecStart=
ExecStart=/usr/bin/php-fpm-legacy --nodaemonize --fpm-config /etc/php-legacy/php-fpm.conf --php-ini /etc/php-legacy/php-fpm.ini
ReadWritePaths=/var/lib/nextcloud
ReadWritePaths=/etc/webapps/nextcloud/config
```
Now you can `systemctl enable --now php-fpm-legacy`.
##### Keep /etc tidy
As a small bonus you can remove the unnecessary uwsgi config files by adding this to `/etc/pacman.conf`:
```
# uWSGI configuration that comes with Nextcloud is not needed
NoExtract = etc/uwsgi/nextcloud.ini
```
#### Nginx
Finally we're at the nginx part and are almost ready to test our setup.
We're assuming you have a working nginx setup with a certbot certificate for your domain and possible domains are in `/etc/nginx/sites-available/` and symlinked to `/etc/nginx/sites-enabled/` to enable them (like Debian).
The nextcloud documentation has a great [example nginx configuration](https://docs.nextcloud.com/server/20/admin_manual/installation/source_installation.html#example-nginx-configuration) which we will use as a base.
You can find the modified version in this repository [here](../static/nextcloud/nextcloud_nginx).
Simply copy this file into `/etc/nginx/sites-available/nextcloud`, replace `cloud.example.com` with your domain, and symlink it to `/etc/nginx/sites-enabled/nextcloud`.
You should now be able to restart nginx and access your nextcloud instance at https://cloud.example.com.
##### Strict Transport Security
For additional security, if everything works fine and you're happy with your domain you can uncomment the HSTS section in the nginx setup.
```nginx
add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload" always;
```
#### Background jobs
Nextcloud requires certain tasks to be run on a scheduled basis. See Nextcloud's documentation for some details. The easiest (and most reliable) way to set up these background jobs is to use the systemd service and timer units that are already installed by nextcloud.
Override to the correct php version by adding the file `/etc/systemd/system/nextcloud-cron.service.d/override.conf` with the following content:
```ini
[Service]
ExecStart=
ExecStart=/usr/bin/php-legacy -c /etc/webapps/nextcloud/php.ini -f /usr/share/webapps/nextcloud/cron.php
```
After that enable and start nextcloud-cron.timer (not the service).
```sh
systemctl enable --now nextcloud-cron.timer
```
### Performance Improvements by in-memory caching
Nextcloud's documentation recommends applying some kind of in-memory object cache to significantly improve performance.
You are able to use both APCu and Redis simultaneously for caching. The combination should be faster than either one alone.
#### APCu
Install `php-legacy-apcu`:
```sh
pacman -S php-legacy-apcu --asdeps
```
Uncomment the following in `/etc/php-legacy/conf.d/apcu.ini`:
```ini
extension=apcu.so
```
In `/etc/webapps/nextcloud/php.ini` enable the following extensions by uncommenting this:
```ini
extension=apcu
apc.ttl=7200
apc.enable_cli = 1
```
Order is relevant, so uncomment these lines rather than adding new ones.
In `/etc/php-legacy/php-fpm.d/nextcloud.conf` uncomment the following under `[nextcloud]`:
```ini
php_value[extension] = apcu
php_admin_value[apc.ttl] = 7200
```
Restart your application server:
```sh
systemctl restart php-fpm-legacy
```
Add to `/etc/webapps/nextcloud/config/config.php`:
```php
'memcache.local' => '\OC\Memcache\APCu',
```
to the `CONFIG` array (so `);` should come after this).
A second application server restart is required, after which everything should be working.
```sh
systemctl restart php-fpm-legacy
```
#### Redis
Install redis and the php-legacy extensions:
```sh
pacman -S redis
pacman -S php-legacy-redis php-legacy-igbinary --asdeps
```
Adjust the following in `/etc/redis.conf`:
```ini
protected-mode yes # only listen on localhost
port 0 # only listen on unix socket
unixsocket /run/redis/redis.sock
unixsocketperm 770
```
The rest should be able to stay as is.
Start and enable the redis service:
```sh
systemctl enable --now redis
```
and check that it is running:
```sh
systemctl status redis
```
Also check that the socket is created:
```sh
ls -l /run/redis/redis.sock
```
You can also run a sanity check by connecting to the socket:
```sh
redis-cli -s /run/redis/redis.sock ping
```
(You should get a `PONG` response)
If everything works fine on the redis side, we can now configure php to use it.
In `/etc/php-legacy/conf.d/redis.ini` uncomment the following:
```ini
extension=redis
```
and analogously in `/etc/php-legacy/conf.d/igbinary.ini`:
```ini
[igbinary]
extension=igbinary.so
igbinary.compact_strings=On
```
Now we can configure Nextcloud to use redis as a cache.
First, add the nextcloud user to the redis group:
```sh
usermod -a -G redis nextcloud
```
You can verify that nextcloud now has access to the redis socket by running:
```sh
sudo -u nextcloud redis-cli -s /run/redis/redis.sock ping
```
In `/etc/webapps/nextcloud/php.ini` uncomment the following:
```ini
; REDIS
extension=igbinary
extension=redis
```
and add the redis unix socket directory to the `open_basedir` directive:
```ini
open_basedir = <your_current_value>:/run/redis
```
In /etc/webapps/nextcloud/config/config.php add the following to the `CONFIG` array:
```php
'memcache.distributed' => '\\OC\\Memcache\\Redis',
'filelocking.enabled' => 'true',
'memcache.locking' => '\\OC\\Memcache\\Redis',
'redis' =>
array (
'host' => '/run/redis/redis.sock',
'port' => 0,
),
```
And finally in `/etc/php-legacy/php-fpm.d/nextcloud.conf` uncomment:
```ini
php_value[extension] = igbinary
php_value[extension] = redis
```
Also, add to the `open_basedir` directive the redis unix socket directory:
```ini
php_value[open_basedir] = <your_current_value>:/run/redis
```
Restart your application server:
```sh
systemctl restart php-fpm-legacy
```
Check that everything works by visiting cloud.example.com and checking the admin overview page.
If you have an internal server error and are not even able to access cloud.example.com, check the nginx error log for details.
### Do not bruteforce throttle local connections
You might see in your admin overview (https://cloud.example.com/settings/admin/overview) an error message like this:
> Your remote address was identified as "192.168.1.1" and is bruteforce throttled at the moment slowing down the performance of various requests. If the remote address is not your address this can be an indication that a proxy is not configured correctly. Further information can be found in the documentation ↗.
This happens because Nextcloud is not able to detect the specific local machine you're connecting from and hence throttles all local connections.
The underlying issue is not Nextcloud but your network setup, specifically your router not allowing NAT loopback to be disabled.
Discussion of this problem can be found here: https://help.nextcloud.com/t/all-lan-ips-are-shown-as-the-router-gateway-how-can-i-get-the-actual-ip-address/134872
The solution: set up a local DNS server and resolve your domain to your local IP address instead of the public one.
A simple approach is to use dnsmasq for this.
See [my dnsmasq.md](./dnsmasq.md) for more details on how to set this up.
## Syncing files with Nextcloud
The GUI for syncing is surprisingly unusable; luckily the CLI is much better.
On Arch Linux you can install the `nextcloud-client` package.
Syncing should now be a simple
```sh
nextcloudcmd -u "email@example.com" --password "$(pass <your_password_path> | head -n1)" <local_folder_for_syncing> https://cloud.example.com
```
Of course, adjust this to your setup.
Adding `-s` makes the sync a bit less verbose.
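To keep a folder in sync automatically, the command can be run from cron; a sketch (the schedule, local folder, and account details are placeholders, and your pass environment must be available, hence the `.zprofile` sourcing):

```cron
*/30 * * * * . ~/.zprofile && nextcloudcmd -s -u "email@example.com" --password "$(pass <your_password_path> | head -n1)" ~/Nextcloud https://cloud.example.com
```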
## Setup a drop-off folder in Nextcloud
This is a quite useful feature to allow others to upload files to your Nextcloud without having to create an account.
Very user-friendly for non-technical people to share high-resolution photos for example.
The share link can also be password-protected such that not everyone can upload files to your server.
1. Create a folder in Nextcloud, e.g. `Drop-off`.
2. Click on the share icon and under share link select "File-drop". This will create a link that you can share with others.
3. Optional: If you want to password-protect the link, click on "Advanced settings" under the Sharing tab of the folder details and set a password of your choice.
### Human-readable link with redirect
If you want a nice human-readable link, you can use your own nginx for this.
Add to your existing port-443 server block in `/etc/nginx/sites-available/nextcloud` (or your domain of choice):
```nginx
location /dropoff {
return 301 <your nextcloud share link>;
}
```
## Sync contacts with khard
We are using `vdirsyncer` to sync our contacts with Nextcloud. For this, install it:
```sh
sudo pacman -S vdirsyncer
```
Then create a config file `~/.config/vdirsyncer/config` with the following content:
```
[general]
status_path = "~/.config/vdirsyncer/status/"

[pair nextcloud_contacts]
a = "nextcloud_contacts_local"
b = "nextcloud_contacts_remote"
collections = ["from a", "from b"]

[storage nextcloud_contacts_local]
type = "filesystem"
path = "~/.local/share/vdirsyncer/"
fileext = ".vcf"

[storage nextcloud_contacts_remote]
type = "carddav"
url = "https://cloud.example.com/remote.php/dav/addressbooks/users/<your_user>/contacts/"
auth = "basic"
username = "<your_user>"
password.fetch = ["shell", "pass <your_password_path> | head -n1"]
```
Note that `<your_user>` is not your email address but the username you can also use to log in to Nextcloud.
You can find it under https://cloud.example.com/settings/users as the smaller text under your display name.
Add to your `~/.config/khard/khard.conf`:
```
[addressbooks]
[[nextcloud]]
path = ~/.local/share/vdirsyncer/contacts/
```
Create `~/.local/share/vdirsyncer/contacts` if it does not already exist.
We will use this folder to store our contacts.
Initial discovery requires you to run
```sh
vdirsyncer discover nextcloud_contacts
```
once.
You should now be able to sync your contacts with `vdirsyncer sync` and view them with `khard`.
### Cronjob
You can of course add `vdirsyncer sync` to your crontab to sync your contacts regularly.
Keep in mind that `pass` needs additional environment variables to work under cron; sourcing your `.zprofile` should do the trick with a correct setup.
Your cron entry should then look something like this:
```cron
*/15 * * * * . ~/.zprofile && vdirsyncer sync
```
See [neomutt.md](./neomutt.md) for more details on how to use khard with neomutt for autocompletion.
## Sync Calendar with Calcurse
Create a config file `~/.config/calcurse/caldav/config`. You can use the following template:
```
# If you want to synchronize calcurse with a CalDAV server using
# calcurse-caldav, create a new directory at $XDG_CONFIG_HOME/calcurse/caldav/
# (~/.config/calcurse/caldav/) and $XDG_DATA_HOME/calcurse/caldav/
# (~/.local/share/calcurse/caldav/) and copy this file to
# $XDG_CONFIG_HOME/calcurse/caldav/config and adjust the configuration below.
# Alternatively, if using ~/.calcurse, create a new directory at
# ~/.calcurse/caldav/ and copy this file to ~/.calcurse/caldav/config and adjust
# the configuration file below.
[General]
# Path to the calcurse binary that is used for importing/exporting items.
Binary = calcurse
# Host name of the server that hosts CalDAV. Do NOT prepend a protocol prefix,
# such as http:// or https://. Append :<port> for a port other than 80.
Hostname = cloud.example.com
# Path to the CalDAV calendar on the host specified above. This is the base
# path following your host name in the URL.
Path = /remote.php/dav/calendars/<your_username>/<your_calendar_name>/
# Type of authentication to use. Must be "basic" or "oauth2"
#AuthMethod = basic
# Enable this if you want to skip SSL certificate checks.
InsecureSSL = No
# Disable this if you want to use HTTP instead of HTTPS.
# Using plain HTTP is highly discouraged.
HTTPS = Yes
# This option allows you to filter the types of tasks synced. To this end, the
# value of this option should be a comma-separated list of item types, where
# each item type is either "event", "apt", "recur-event", "recur-apt", "todo",
# "recur" or "cal". Note that the comma-separated list must not contain any
# spaces. Refer to the documentation of the --filter-type command line argument
# of calcurse for more details. Set this option to "cal" if the configured
# CalDAV server doesn't support tasks, such as is the case with Google
# Calendar.
#SyncFilter = cal,todo
SyncFilter = cal
# Disable this option to actually enable synchronization. If it is enabled,
# nothing is actually written to the server or to the local data files. If you
# combine DryRun = Yes with Verbose = Yes, you get a log of what would have
# happened with this option disabled.
DryRun = No
# Enable this if you want detailed logs written to stdout.
Verbose = Yes
# Credentials for HTTP Basic Authentication (if required).
# Set `Password` to your password in plaintext (unsafe),
# or `PasswordCommand` to a shell command that retrieves it (recommended).
[Auth]
Username = alexander@bocken.org
# Password = <your_password>
# PasswordCommand = # Does not appear to work
# Optionally specify additional HTTP headers here.
#[CustomHeaders]
#User-Agent = Mac_OS_X/10.9.2 (13C64) CalendarAgent/176
# Use the following to synchronize with an OAuth2-based service
# such as Google Calendar.
#[OAuth2]
#ClientID = your_client_id
#ClientSecret = your_client_secret
# Scope of access for API calls. Synchronization requires read/write.
#Scope = https://example.com/resource/scope
# Change the redirect URI if you receive errors, but ensure that it is identical
# to the redirect URI you specified in the API settings.
#RedirectURI = http://127.0.0.1
```
The `Path` variable is simply the path you get when you click on the edit button for the calendar in the web interface and copy the "Internal link".
Adjusting the username and calendar name in the above template should also simply work:
You can find your username as described in the khard section.
The calendar name is the name you gave your calendar in the web interface, all lower-case.
For authentication, I could not get the `PasswordCommand` option to work. Simply storing the password using the `Password` option is of course not recommended.
Luckily, there is the `CALCURSE_CALDAV_PASSWORD` environment variable, which we can set programmatically instead.
To initialize the setup, run:
```sh
CALCURSE_CALDAV_PASSWORD=$(pass <nextcloud_password_path>) calcurse-caldav --init=two-way
```
And for future syncing a simple
```sh
CALCURSE_CALDAV_PASSWORD=$(pass <nextcloud_password_path>) calcurse-caldav
```
does the trick.
Like with `khard`, you can now add this to your crontab to sync your calendar regularly; this will also require sourcing `~/.zprofile` for `pass` to work. Maybe a wrapper script is appropriate here.
See my [syncclouds.sh script as an example](https://bocken.org/git/Alexander/dotfiles/src/branch/master/.local/bin/syncclouds.sh) which also handles corrupted lockfiles because of unexpected aborts.
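The lockfile handling can be sketched as a small generic wrapper. This is a hypothetical helper (not the actual syncclouds.sh); the lock path and the 30-minute staleness threshold are assumptions, so adjust to taste:

```sh
#!/bin/sh
# run-exclusive.sh -- run a sync command, but never two at once.
# Pass the actual sync command as arguments.
LOCKDIR="${TMPDIR:-/tmp}/calcurse-sync.lock"

# Drop a stale lock left behind by an unexpected abort (older than 30 min).
if [ -d "$LOCKDIR" ]; then
    find "$LOCKDIR" -maxdepth 0 -mmin +30 -exec rmdir {} \; 2>/dev/null
fi

if mkdir "$LOCKDIR" 2>/dev/null; then
    # We hold the lock; release it again on any exit.
    trap 'rmdir "$LOCKDIR"' EXIT INT TERM
    "$@"
else
    echo "another sync is still running, skipping" >&2
    exit 1
fi
```

A cron entry would then be along the lines of `. ~/.zprofile && CALCURSE_CALDAV_PASSWORD=$(pass <nextcloud_password_path>) run-exclusive.sh calcurse-caldav`.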
TODO: investigate whether todos can also be synced. Could not get it working myself.
### Sync to Android
If you wish to sync your calendar to your Android phone, you can use the [DAVx⁵](https://www.davx5.com/) app. Contacts can be synced with it as well.

---
*File: `docs/nvidia.md`*
# Nvidia
Good luck.
## Installation
Arch: install the `nvidia` package.
## Configuration
### Minimal xorg setup for only running on Nvidia GPU
This minimal configuration should get you started. Add this in `/etc/X11/xorg.conf.d` in a file similar to `10-nvidia-drm-outputclass.conf`
```xf86config
Section "OutputClass"
    Identifier "intel"
    MatchDriver "i915"
    Driver "modesetting"
EndSection

Section "OutputClass"
    Identifier "nvidia"
    MatchDriver "nvidia-drm"
    Driver "nvidia"
    Option "AllowEmptyInitialConfiguration"
    Option "PrimaryGPU" "yes"
    ModulePath "/usr/lib/nvidia/xorg"
    ModulePath "/usr/lib/xorg/modules"
EndSection
```
### Scaling without overscan on PRIME displays
If you cannot use `xrandr --scale` without ending up with over- or underscan, you need to adjust a kernel module parameter:
Create a file in `/etc/modprobe.d` (for example called `nvidia-drm-nomodeset.conf`) with the following content.
```
options nvidia-drm modeset=1
```
and rebuild your initramfs via
```sh
sudo mkinitcpio -P
```
After a reboot this should enable scaling for PRIME displays.

---
*File: `docs/pass.md`*
# Pass
Pass is a password manager that follows the UNIX philosophy of doing one thing and doing it well. It is designed to be simple and easy to use, while still being secure and flexible.
It is basically just a simple shell-script, working on files.
The main idea is to have a bunch of gpg encrypted files, storing the passwords.
These files can then be synced using your favourite method, be it git, syncthing or anything else.
Or just kept locally on your machine.
In the end, it's just a file, or a bunch of them.
This means you don't have to rely on the security practices of a large company, which is a primary target for attacks.
Pass has several very useful extensions, allowing easy access, generation of OTP for 2FA and more.
## Install
### Generate a gpg key
1. If you already have a gpg key, you are done here. If not, let's generate a key:
```sh
gpg --full-gen-key
```
2. Select your key type (if no idea what, choose RSA).
3. Select a 4096 bit long key
4. Your key should not expire. So select the corresponding option (usually 0)
5. Name your key and add an email. This email does not have to be your real one, but this key can also be used to sign/encrypt mails. If this is your plan, choose the mail address you plan to use with this key.
6. Add a password to the key (keep blank for an empty password)
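If you prefer, steps 2-6 can also be done unattended with GnuPG's batch mode: write a parameter file (all values below are placeholders) and feed it to `gpg --batch --generate-key`. Unless you add `%no-protection`, you will still be asked for a passphrase via pinentry.

```
%echo Generating a key for pass
Key-Type: RSA
Key-Length: 4096
Subkey-Type: RSA
Subkey-Length: 4096
Name-Real: Your Name
Name-Email: email@example.com
Expire-Date: 0
%commit
```

Run it with `gpg --batch --generate-key params.txt`.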
### Install on Arch
```sh
pacman -S pass pass-otp
```
### Setup
1. We want to set up pass. For this we run the following command. This tells pass to use the gpg key connected to the email address given.
```sh
pass init <email_used_for_gpg_key>
```
### Usage
1. **Adding passwords**. To do this, type the following command. Here we use a name to identify which password this is. Usually this is the service/website/program/file/... this password is used for. If several accounts exist for one service, you can also create a nested structure like `serviceA/account1` and `serviceA/account2`. This will just create a folder called `serviceA` and put the corresponding files in there. After running the command below, it asks you to type the password you want to store.
```sh
pass add <name_linked_to_password>
```
2. **Retrieving the password**. To look up the password, simply run the command below. It may be that a prompt asks you to type in your GPG key-pair password.
```sh
pass <name_linked_to_password>
```
### Quality of life improvements
1. **passmenu**. If you use `dmenu`, install [this](https://tools.suckless.org/dmenu/scripts/passmenu2) script to get a dmenu-friendly list. Just type a substring of the file name, and the script copies the contents to your clipboard. For OTP it automatically generates the code and copies it to your clipboard. If the file contains two lines, the second line is copied into your selection, which is useful for storing user names or similar information. Bind this script to a keyboard shortcut for actual usability.
2. **One Time Passwords/Multi Factor Authentication**. Most of the time you get a QR code that you are supposed to scan with something like Microsoft Authenticator. Save this QR code as an image and run it through `zbarimg` (installed via `pacman -S zbar`). This returns a URI starting with `otpauth://...`. Create a new "password" using `pass otp add <otp_password_file>` and paste the URI as the password. Now run `pass otp <otp_password_file>`. This generates the one-time password. Again, this works with the passmenu script above. You may have to adjust the linked script to your naming convention for otp files.
3. **Syncing**: Usually you want to have your passwords in more than one place. Laptop and Phone are a very common setup. For android you have several options.
The most straightforward, and probably safest, way is to copy the files to your device and also copy over the private key.
This key is then imported in to an app like [OpenKeyChain](https://www.openkeychain.org/). Now you can open these files using this app.
But this comes with a harsh drawback on usability.
Another setup would be a private git repo, which you can clone to different devices.
Again, on android [Password Store](https://passwordstore.app/) is a very powerful tool, which allows you to auto-insert in browsers and also generate the OTP.
To set up a git sync, you enable it with pass using `pass git init`. Then add the remote repo as origin using `pass git remote add origin user@service:pos`.
Now this is set up and `pass git push` auto-commits and pushes to the remote repo. `pass git pull` pulls from there.
In Password Store you can now clone from this repo and use the key you imported to OpenKeyChain to decrypt the passwords!
On iOS I don't know of a similar setup, but am happy to take in recommendations!
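The QR-code workflow from point 2 above can be collapsed into a single pipe; the pass-otp README documents this form (here `qr.png` and the store path are placeholders, and `insert` is the canonical subcommand name):

```sh
zbarimg -q --raw qr.png | pass otp insert <otp_password_file>
pass otp <otp_password_file>
```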
### Useful commands
- `pass list` : Shows the folder structure of all stored passwords
- `pass grep <...>` : Searches the decrypted contents of all files for the search string
- `pass edit <...>` : If a password changed, this lets you edit the file
- `pass generate <...>` : In need of a new password? Just let pass generate a secure one
- You can use pass in scripts, for example to enter secret information automatically without keeping it in clear text.

---
*File: `docs/php.md`*
# PHP
A language that integrates easily into dynamic HTML, with read/write file access possible on the server side.
# Installation
As always, we're assuming Debian + Nginx for this.
```sh
apt update
apt install php php-fpm
```
`php-fpm` should automatically enable its service.
Verify via `systemctl status php7.3-fpm.service`.
# Setup
Check whether you want to use a TCP connection or a UNIX socket for php connections.
The default and recommended way is TCP/IP.
## TCP/IP
You can edit the IP and port of the connection in `/etc/php/7.3/fpm/pool.d/www.conf`
The default is:
```
listen = 127.0.0.1:9000
```
## Socket
For socket, use:
```
listen = run/php/php7.3-fpm.sock
```
## Nginx
To enable nginx to talk to php add the following to your website config:
```nginx
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass 127.0.0.1:9000;
}
```
Replace the TCP/IP address with the appropriate socket file if that is your preferred setup.
Afterwards, since you've modified the nginx config, this of course requires a `systemctl restart nginx`.
Tip: `nginx -t` lets you verify your syntax without killing the running nginx instance, leading to a smoother switchover.
Create a file in the root dir of your website (so probably somewhere in `/var/www/`) ending in `.php` with the content:
```php
<?php
phpinfo();
```
And visit `example.com/file.php` to see whether it worked.
You should get a screen with a lot of information about your php installation.
## File writing permissions
Per default PHP is unable to read or write to your server drive.
It is best for this to re-own any directories where php will be writing to to the user and group `www-data`.
Thus
```sh
chown -R www-data:www-data <dir>
find <dir> -type d -exec chmod 755 {} \;
find <dir> -type f -exec chmod 644 {} \;
```
should be a good starting-off point: directories need the execute bit to be traversable, while files only need `644`.
# Learning PHP
If you're completely new to php [w3schools' course](https://www.w3schools.com/php) is probably a good starting point.

---
*File: `docs/qutebrowser.md`*

```sh
qutebrowser --nowindow ':adblock-update;;later 10000 download-clear'
```
will update the adblock lists without starting a qutebrowser window.
## Setting spellcheck languages
This is a bit more involved since it requires a script that can only be found in the source code of qutebrowser.
1. Download the qutebrowser source code: `git clone https://github.com/qutebrowser/qutebrowser`
2. `cd qutebrowser`
3. Install the wanted languages e.g. `python -m scripts.dictcli install en-GB`
4. Set spellcheck to the wanted languages in qutebrowser.
Qutebrowser can also use multiple languages by parsing a list:
`:set spellcheck.languages '["en-GB", "de-DE"]'`
## Greasemonkey scripts
To add scripts such as 4chanX to qutebrowser, add the JS file to `${XDG_DATA_HOME:-$HOME/.local/share}/qutebrowser/greasemonkey`.
### 4chanX
For 4chanX this would be:
```sh
wget -P ${XDG_DATA_HOME:-$HOME/.local/share}/qutebrowser/greasemonkey https://www.4chan-x.net/builds/4chan-X.user.js
```
followed by a `:greasemonkey-reload` in qutebrowser to activate the newly added JavaScript.
### Skip Youtube Ads
Automatically mute, speed up (at least 10x) and skip video ads on youtube.
There are multiple versions out there that try to accomplish the same thing.
Various versions can be found in [this github issue thread](https://github.com/qutebrowser/qutebrowser/issues/6480#issuecomment-876759237).
For me personally version 1.0.0 seems to work best.
Thus, create a file in `${XDG_DATA_HOME:-$HOME/.local/share}/qutebrowser/greasemonkey` with the following content:
```js
// ==UserScript==
// @name Auto Skip YouTube Ads
// @version 1.0.0
// @description Speed up and skip YouTube ads automatically
// @author jso8910
// @match *://*.youtube.com/*
// @exclude *://*.youtube.com/subscribe_embed?*
// ==/UserScript==
setInterval(() => {
const btn = document.querySelector('.videoAdUiSkipButton,.ytp-ad-skip-button')
if (btn) {
btn.click()
}
const ad = [...document.querySelectorAll('.ad-showing')][0];
if (ad) {
document.querySelector('video').playbackRate = 10;
}
}, 50)
```
followed by a `:greasemonkey-reload` in qutebrowser.

---
*File: `docs/rainloop.md`*
# General
[Rainloop](https://www.rainloop.net/) is a web-based email client that works with your local install of dovecot etc. It's easy to install and use.
# Setting up LEMP Stack
1. `apt install mariadb-server`
2. `systemctl enable mysql`
3. `apt install php php7.3-fpm php7.3-mysql -y`
4. `systemctl enable php7.3-fpm`. To test the PHP setup, add the following to your nginx sites-available config. Restart nginx using `systemctl restart nginx` and add a new page called `index.php` to your homepage directory with `<?php phpinfo();?>` as the only content. If the PHP install worked fine, this will show you the installed PHP packages. Delete this file afterwards.
```
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
}
```
# Installing rainloop
1. `apt install php7.3-{curl,xml}`
2. `wget http://www.rainloop.net/repository/webmail/rainloop-community-latest.zip`
3. `mkdir /var/www/html/rainloop`
4. `unzip rainloop-community-latest.zip -d /var/www/html/rainloop/`
5. `find /var/www/html/rainloop/ -type d -exec chmod 755 {} \;`
6. `find /var/www/html/rainloop/ -type f -exec chmod 644 {} \;`
7. `chown -R www-data.www-data /var/www/html/rainloop/`
8. Edit the `nginx` entry for the webmail : `vim /etc/nginx/sites-available/rainloop.conf`. Make sure that the `php` version you installed above matches the php version in line 20. It also should match the php version of the LEMP stack. Also change the hostname accordingly.
```nginx
server {
listen 80;
server_name webmail.hostname.xyz;
root /var/www/html/rainloop;
access_log /var/log/rainloop/access.log;
error_log /var/log/rainloop/error.log;
index index.php;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
fastcgi_index index.php;
fastcgi_split_path_info ^(.+\.php)(.*)$;
fastcgi_keep_conn on;
fastcgi_pass unix:/var/run/php/php7.3-fpm.sock;
include /etc/nginx/fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
location ~ /\.ht {
deny all;
}
location ^~ /data {
deny all;
}
}
```
9. `mkdir /var/log/rainloop`
10. `nginx -t`
11. `ln -s /etc/nginx/sites-available/rainloop.conf /etc/nginx/sites-enabled/`
12. `systemctl reload nginx`
# Configure RainLoop
1. Go to `http://webmail.hostname.xyz/?admin`. A web interface should pop up (if not, check the PHP install: all the same versions? Is PHP accessible? Are the permissions set correctly?).
2. Log in using `admin` and `12345`. It is strongly recommended to change these as soon as you log in. This can be done under `Security` in the left menu.
3. Under `Domains` add your local domains, ports and authentication method and delete the defaults.
4. Now you should be able to log in to the client on `webmail.hostname.xyz` using your email address and password.
# Add database for contacts
1. `mysql -uroot -p`
2. Add a database (copy-paste each line individually; change `rainlooppassword` to something proper):
```sql
create database rainloopdb;
GRANT ALL PRIVILEGES ON rainloopdb.* TO 'rainloopuser'@'localhost' IDENTIFIED BY 'rainlooppassword';
flush privileges;
quit
```
3. Go to `Contacts` in the admin panel and activate the database
4. Select storage `mysql` and choose as DSN `mysql:host=localhost;port=3306;dbname=rainloopdb`. The user name is `rainloopuser` and the password the password you used to set up the database.
# Certbot
Give the webmail client proper security using `certbot --nginx` to extend your certificate.
# Increasing the upload limit
To increase the maximal upload through the rainloop interface to 100 MB, we do:
1. `vim /etc/php/7.3/fpm/php.ini`
- Set `upload_max_filesize` to `100M`
- Set `post_max_size` to `100M`
2. `systemctl restart php7.3-fpm`
3. `vim /etc/nginx/nginx.conf`
- Set `client_max_body_size` to `100M`
4. `systemctl restart nginx`
5. Go to `http://webmail.hostname.xyz/?admin` and under `General` set `Upload size limit` to `100M`
- Here you can also see if the php settings worked out.

---
*File: `docs/restic.md`*
# Restic
Restic is an encrypted, compressed and easy-to-use backup system.
## Install Requirements
- You only need to install restic on the **local** machine! Everything else is just ssh; the server is used as a network-attached disk.
- Upside: minimal work on the server
- Downside: no easy way to check on the backups from the server side
```sh
pacman -S restic
```
## Setup
To set up a repository (the name of a backup unit in restic), run on your local machine
```sh
restic -r sftp:user@backupserver.lan:/backups/machine_id init
```
This initializes the server side (the same way as git) under the path `/backups/machine_id`.
You can also initialize it with a different local path (e.g. an external hard drive) using
```sh
restic init --repo /path/backups
```
For more details, [RTFM](https://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#sftp).
## Backup Methods
To back up your system, you can use a `restic_files` list and the following command
```sh
restic -r sftp:user@backupserver.lan:/backups/machine_id --password-command "pass homeserver/restic/T490" --files-from ~/.config/restic/restic_files --no-scan backup
```
`restic_files` is just a file containing the *patterns* or *paths* of the things to back up.
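For illustration, a `restic_files` list might look like this (the paths are assumptions; adjust to your system):

```
/home/user/.config
/home/user/documents
/home/user/pictures
/etc
```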
You can also use the usual ssh config for using specific hostnames, users and ports.
You can automate this using a simple cron-job, which runs with the regularity you like.
The `--no-scan` option is useful to save some I/O overhead.
For more details, [RTFM](https://restic.readthedocs.io/en/latest/040_backup.html).
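The cron automation mentioned above could then be a single entry (the schedule is an arbitrary example; repository and paths as in the backup command):

```cron
30 3 * * * restic -r sftp:user@backupserver.lan:/backups/machine_id --password-command "pass homeserver/restic/T490" --files-from ~/.config/restic/restic_files --no-scan backup
```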
## Restoring from Backups
To restore a full backup, run
```
restic -r sftp:user@backupserver.lan:/backups/machine_id --password-command "pass homeserver/restic/T490" --verbose restore SNAPSHOTNUMBER --target /your/fav/path
```
The snapshot number is the snapshot id you want to restore to, which you get by using
```
restic -r sftp:user@backupserver.lan:/backups/machine_id --password-command "pass homeserver/restic/T490" snapshots
```
This gives you a list of the snapshots with their dates and IDs.
You can use `--exclude` and `--include` for the specific inclusion/exclusion of single files or folders. This allows you to restore **single files**.
Here the files/folders have to be given using their paths inside the snapshots. If you don't remember them, use `restic -r ..... ls latest` or `restic -r ... find filename`.
You can also mount the snapshots using
```
restic -r sftp:user@backupserver.lan:/backups/machine_id --password-command "pass homeserver/restic/T490" mount /your/fav/mountpoint
```
With this, you can browse the different snapshots. For this [`fusermount`](https://archlinux.org/packages/extra/x86_64/fuse2/) has to be installed.
For more details, [RTFM](https://restic.readthedocs.io/en/latest/050_restore.html).
## Keeping an overview
You can **list** all snapshots using
```
restic -r sftp:user@backupserver.lan:/backups/machine_id --password-command "pass homeserver/restic/T490" snapshots
```
You should regularly **check the health** of your backups! This can be done by
```
restic -r sftp:user@backupserver.lan:/backups/machine_id --password-command "pass homeserver/restic/T490" check
```
This however only checks whether the structure is okay. If you want to check that all the data files are unmodified and intact, run
```
restic -r sftp:user@backupserver.lan:/backups/machine_id --password-command "pass homeserver/restic/T490" check --read-data
```
This however might take some time.
If you want to **remove** some files from the snapshots, you can use
```
restic -r sftp:user@backupserver.lan:/backups/machine_id --password-command "pass homeserver/restic/T490" rewrite --exclude /path/to/wrongly/added/file SNAPSHOTNUMBER
```
[RTFM](https://restic.readthedocs.io/en/latest/045_working_with_repos.html) for more info.
If you want to remove complete snapshots, either because they are old enough that you don't care anymore or for other reasons, use
```
restic -r sftp:user@backupserver.lan:/backups/machine_id --password-command "pass homeserver/restic/T490" forget SNAPSHOTNUMBER
```
To also delete the data that is no longer needed by any snapshot, run
```
restic -r sftp:user@backupserver.lan:/backups/machine_id --password-command "pass homeserver/restic/T490" prune
```
To combine both, use the `--prune` flag for the `forget` command.
See [here](https://restic.readthedocs.io/en/latest/060_forget.html) for more info.
The selection can be automated using `--keep-last` and `--keep-{hourly, daily, weekly, monthly, yearly}` flags to the `forget` command. For details see [here](https://restic.readthedocs.io/en/latest/060_forget.html#removing-snapshots-according-to-a-policy).
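Putting the policy flags together, a periodic cleanup might look like this (the retention numbers are arbitrary examples; repository and password as above):

```
restic -r sftp:user@backupserver.lan:/backups/machine_id --password-command "pass homeserver/restic/T490" forget --keep-daily 7 --keep-weekly 5 --keep-monthly 12 --prune
```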

---
*File: `docs/ssh.md`*
# General
SSH is a utility to connect to remote servers.
The basic syntax is
```
ssh user@domain
```

(The `IgnoreUnknown` line lets SSH clients that do not know macOS's `UseKeychain` option parse the same config:)
```
IgnoreUnknown AddKeysToAgent,UseKeychain
```

If you need to connect to an access server before connecting to the actual server, add
## All EXEMPLUM-COMPANY
```
Host EXEMP*
User username
IdentityFile ~/.ssh/id_rsa
AddKeysToAgent yes
UseKeychain yes
```
## Access server
```
Host EXEMPaccess
HostName login.example.com
```
## Working server
```
Host EXEMPwork
HostName work.example.com
proxycommand ssh -CW %h:%p EXEMPaccess ## access server
```
to your `~/.ssh/config`.
To connect to the working server, just type `ssh EXEMPwork`.
## Share your clipboard with the server
To be able to copy/paste between server and client we need to install `xclip` and `xorg-clipboard` on the server. (Arch: `pacman -S xclip xorg-clipboard`)
Ensure that the server has enabled X11 forwarding by adding `X11Forwarding yes` to `/etc/ssh/sshd_config` and restarting the sshd service.
You should now be able to share the clipboard via `ssh -XY user@domain`, or make it permanent by adding the following to the corresponding Host block in your `~/.ssh/config`:
```
ForwardX11 yes
ForwardX11Trusted yes
```

---
*File: `README.md`*
Install instructions, configuration methods and much more for the setup of a useful operating system.
Happy to accept pull requests for new topics!
# Desktop Programs
- [Laptop Setup](docs/LaptopSetup.md) General tips and tricks around the quirks of Arch on a Laptop.
- [qutebrowser](docs/qutebrowser.md) highly customizable keyboard focused webbrowser using vim bindings
- [vimwiki](docs/vimwiki.md) wiki script for vim
- [weechat](docs/weechat.md) TUI client for matrix
- [git](docs/GIT.md) version control software
- [neomutt](docs/neomutt.md) highly customizable TUI email client
- [nvidia](docs/nvidia.md) Various recommendations for setting up NVIDIA drivers
- [matlab](docs/matlab.md) A proprietary but extensive python alternative with integrated IDE
- [JohnTheRipper](docs/johntheripper.md) A password cracker
- [pass](docs/pass.md) A password manager
- [beancount](docs/beancount.md) A ledger for text-file bookkeeping with a lot of features
- [LUKS2 fully encrypted drive](docs/luks2.md) A fully encrypted hard-drive tutorial using a strong KDF and GRUB via grub-improved-luks2-git
- [restic backup](docs/restic.md) A backup software
# Server
- [server](docs/ServerSetup.md) short guide for hosting a server
- [php](docs/php.md) short guide for getting php up and running with nginx
- [ssh](docs/ssh.md) ssh configuration
- [git](docs/GIT.md) version control software
- [rainloop](docs/rainloop.md) webbased email client
- [anki sync server](docs/anki_sync_server.md) personal sync server for anki, a spaced repetition learning program
- [docker](docs/docker.md) General tips and tricks around the container manager
- [Searx](docs/Searx.md) A meta search engine which respects privacy. Arch setup guide.
- [Nextcloud](docs/nextcloud.md) A self-hosted cloud solution. Installation (on Arch), configuration, and usage tips.
- [dnsmasq](docs/dnsmasq.md) A lightweight DNS server with DHCP and TFTP support.
- [calcurse sync](docs/calDAV.md) Sync calcurse with your phone etc.
# Other
- [Chromecast with Google TV](docs/ChromecastGoogleTv.md) a neat way to disable the built-in launcher and its baked-in ads.
# Admin

---
*File: php-fpm pool configuration for Nextcloud*
; Start a new pool named 'nextcloud'.
; the variable $pool can be used in any directive and will be replaced by the
; pool name ('nextcloud' here)
[nextcloud]
; Per pool prefix
; It only applies on the following directives:
; - 'access.log'
; - 'slowlog'
; - 'listen' (unixsocket)
; - 'chroot'
; - 'chdir'
; - 'php_values'
; - 'php_admin_values'
; When not set, the global prefix (or /usr) applies instead.
; Note: This directive can also be relative to the global prefix.
; Default Value: none
;prefix = /path/to/pools/$pool
; Unix user/group of processes
; Note: The user is mandatory. If the group is not set, the default user's group
; will be used.
user = nextcloud
group = nextcloud
; The address on which to accept FastCGI requests.
; Valid syntaxes are:
; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific IPv4 address on
; a specific port;
; '[ip:6:addr:ess]:port' - to listen on a TCP socket to a specific IPv6 address on
; a specific port;
; 'port' - to listen on a TCP socket to all addresses
; (IPv6 and IPv4-mapped) on a specific port;
; '/path/to/unix/socket' - to listen on a unix socket.
; Note: This value is mandatory.
listen = /run/php-fpm-legacy/nextcloud.sock
; Set listen(2) backlog.
; Default Value: 511 (-1 on FreeBSD and OpenBSD)
;listen.backlog = 511
; Set permissions for unix socket, if one is used. In Linux, read/write
; permissions must be set in order to allow connections from a web server. Many
; BSD-derived systems allow connections regardless of permissions. The owner
; and group can be specified either by name or by their numeric IDs.
; Default Values: user and group are set as the running user
; mode is set to 0660
listen.owner = nextcloud
listen.group = http
listen.mode = 0660
; When POSIX Access Control Lists are supported you can set them using
; these options, value is a comma separated list of user/group names.
; When set, listen.owner and listen.group are ignored
;listen.acl_users =
;listen.acl_groups =
; List of addresses (IPv4/IPv6) of FastCGI clients which are allowed to connect.
; Equivalent to the FCGI_WEB_SERVER_ADDRS environment variable in the original
; PHP FCGI (5.2.2+). Makes sense only with a tcp listening socket. Each address
; must be separated by a comma. If this value is left blank, connections will be
; accepted from any ip address.
; Default Value: any
listen.allowed_clients = 127.0.0.1
; Specify the nice(2) priority to apply to the pool processes (only if set)
; The value can vary from -19 (highest priority) to 20 (lower priority)
; Note: - It will only work if the FPM master process is launched as root
; - The pool processes will inherit the master process priority
; unless specified otherwise
; Default Value: not set
; process.priority = -19
; Set the process dumpable flag (PR_SET_DUMPABLE prctl) even if the process user
; or group is different than the master process user. It allows creating a
; core dump of and ptracing the process for the pool user.
; Default Value: no
; process.dumpable = yes
; Choose how the process manager will control the number of child processes.
; Possible Values:
; static - a fixed number (pm.max_children) of child processes;
; dynamic - the number of child processes is set dynamically based on the
; following directives. With this process management, there will
; always be at least one child.
; pm.max_children - the maximum number of children that can
; be alive at the same time.
; pm.start_servers - the number of children created on startup.
; pm.min_spare_servers - the minimum number of children in 'idle'
; state (waiting to process). If the number
; of 'idle' processes is less than this
; number then some children will be created.
; pm.max_spare_servers - the maximum number of children in 'idle'
; state (waiting to process). If the number
; of 'idle' processes is greater than this
; number then some children will be killed.
; pm.max_spawn_rate - the maximum rate at which child processes
; are spawned at once.
; ondemand - no children are created at startup. Children will be forked when
; new requests connect. The following parameters are used:
; pm.max_children - the maximum number of children that
; can be alive at the same time.
; pm.process_idle_timeout - The number of seconds after which
; an idle process will be killed.
; Note: This value is mandatory.
pm = dynamic
; The number of child processes to be created when pm is set to 'static' and the
; maximum number of child processes when pm is set to 'dynamic' or 'ondemand'.
; This value sets the limit on the number of simultaneous requests that will be
; served. Equivalent to the Apache MaxClients directive with mpm_prefork.
; Equivalent to the PHP_FCGI_CHILDREN environment variable in the original PHP
; CGI. The below defaults are based on a server without much resources. Don't
; forget to tweak pm.* to fit your needs.
; Note: Used when pm is set to 'static', 'dynamic' or 'ondemand'
; Note: This value is mandatory.
pm.max_children = 5
; The number of child processes created on startup.
; Note: Used only when pm is set to 'dynamic'
; Default Value: (min_spare_servers + max_spare_servers) / 2
pm.start_servers = 2
; The desired minimum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.min_spare_servers = 1
; The desired maximum number of idle server processes.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
pm.max_spare_servers = 3
; The maximum rate at which child processes are spawned at once.
; Note: Used only when pm is set to 'dynamic'
; Note: Mandatory when pm is set to 'dynamic'
; Default Value: 32
;pm.max_spawn_rate = 32
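; Illustrative sizing for the pm.* values above (the numbers are assumptions,
; not measurements): if each worker peaks at roughly 80 MB and you budget
; about 400 MB of RAM for this pool, 400 / 80 = 5 justifies
; pm.max_children = 5; pm.start_servers = 2 with 1-3 spares keeps a small
; warm pool suitable for a lightly loaded server.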
; The number of seconds after which an idle process will be killed.
; Note: Used only when pm is set to 'ondemand'
; Default Value: 10s
;pm.process_idle_timeout = 10s;
; The number of requests each child process should execute before respawning.
; This can be useful to work around memory leaks in 3rd party libraries. For
; endless request processing specify '0'. Equivalent to PHP_FCGI_MAX_REQUESTS.
; Default Value: 0
;pm.max_requests = 500
; The URI to view the FPM status page. If this value is not set, no URI will be
; recognized as a status page. It shows the following information:
; pool - the name of the pool;
; process manager - static, dynamic or ondemand;
; start time - the date and time FPM has started;
; start since - number of seconds since FPM has started;
; accepted conn - the number of requests accepted by the pool;
; listen queue - the number of requests in the queue of pending
; connections (see backlog in listen(2));
; max listen queue - the maximum number of requests in the queue
; of pending connections since FPM has started;
; listen queue len - the size of the socket queue of pending connections;
; idle processes - the number of idle processes;
; active processes - the number of active processes;
; total processes - the number of idle + active processes;
; max active processes - the maximum number of active processes since FPM
; has started;
; max children reached - the number of times the process limit has been
; reached when pm tries to start more children (works
; only for pm 'dynamic' and 'ondemand');
; Values are updated in real time.
; Example output:
; pool: www
; process manager: static
; start time: 01/Jul/2011:17:53:49 +0200
; start since: 62636
; accepted conn: 190460
; listen queue: 0
; max listen queue: 1
; listen queue len: 42
; idle processes: 4
; active processes: 11
; total processes: 15
; max active processes: 12
; max children reached: 0
;
; By default the status page output is formatted as text/plain. Passing either
; 'html', 'xml' or 'json' in the query string will return the corresponding
; output syntax. Example:
; http://www.foo.bar/status
; http://www.foo.bar/status?json
; http://www.foo.bar/status?html
; http://www.foo.bar/status?xml
;
; By default the status page only outputs short status. Passing 'full' in the
; query string will also return status for each pool process.
; Example:
; http://www.foo.bar/status?full
; http://www.foo.bar/status?json&full
; http://www.foo.bar/status?html&full
; http://www.foo.bar/status?xml&full
; The Full status returns for each process:
; pid - the PID of the process;
; state - the state of the process (Idle, Running, ...);
; start time - the date and time the process has started;
; start since - the number of seconds since the process has started;
; requests - the number of requests the process has served;
; request duration - the duration in µs of the requests;
; request method - the request method (GET, POST, ...);
; request URI - the request URI with the query string;
; content length - the content length of the request (only with POST);
; user - the user (PHP_AUTH_USER) (or '-' if not set);
; script - the main script called (or '-' if not set);
; last request cpu - the %cpu the last request consumed
; it's always 0 if the process is not in Idle state
; because CPU calculation is done when the request
; processing has terminated;
; last request memory - the max amount of memory the last request consumed
; it's always 0 if the process is not in Idle state
; because memory calculation is done when the request
; processing has terminated;
; If the process is in Idle state, the information relates to the last
; request the process has served. Otherwise it relates to the request
; currently being served.
; Example output:
; ************************
; pid: 31330
; state: Running
; start time: 01/Jul/2011:17:53:49 +0200
; start since: 63087
; requests: 12808
; request duration: 1250261
; request method: GET
; request URI: /test_mem.php?N=10000
; content length: 0
; user: -
; script: /home/fat/web/docs/php/test_mem.php
; last request cpu: 0.00
; last request memory: 0
;
; Note: a sample real-time FPM status monitoring web page is available
; in: /usr/share/php-legacy/fpm/status.html
;
; Note: The value must start with a leading slash (/). The value can be
; anything, but it may not be a good idea to use the .php extension or it
; may conflict with a real PHP file.
; Default Value: not set
;pm.status_path = /status
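; If enabled, the web server must route the path to this pool. A minimal
; (hypothetical) nginx snippet, assuming the socket path used above, could be:
;   location = /status {
;       allow 127.0.0.1; deny all;
;       include fastcgi_params;
;       fastcgi_param SCRIPT_NAME /status;
;       fastcgi_pass unix:/run/php-fpm-legacy/nextcloud.sock;
;   }
; after which e.g. `curl 'http://localhost/status?json'` returns the metrics.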
; The address on which to accept FastCGI status request. This creates a new
; invisible pool that can handle requests independently. This is useful
; if the main pool is busy with long-running requests, because it is still
; possible to get the status before those requests finish.
;
; Valid syntaxes are:
; 'ip.add.re.ss:port' - to listen on a TCP socket to a specific IPv4 address on
; a specific port;
; '[ip:6:addr:ess]:port' - to listen on a TCP socket to a specific IPv6 address on
; a specific port;
; 'port' - to listen on a TCP socket to all addresses
; (IPv6 and IPv4-mapped) on a specific port;
; '/path/to/unix/socket' - to listen on a unix socket.
; Default Value: value of the listen option
;pm.status_listen = 127.0.0.1:9001
; The ping URI to call the monitoring page of FPM. If this value is not set, no
; URI will be recognized as a ping page. This could be used to test from outside
; that FPM is alive and responding, or to
; - create a graph of FPM availability (rrd or such);
; - remove a server from a group if it is not responding (load balancing);
; - trigger alerts for the operating team (24/7).
; Note: The value must start with a leading slash (/). The value can be
; anything, but it may not be a good idea to use the .php extension or it
; may conflict with a real PHP file.
; Default Value: not set
;ping.path = /ping
; This directive may be used to customize the response of a ping request. The
; response is formatted as text/plain with a 200 response code.
; Default Value: pong
;ping.response = pong
; The access log file
; Default: not set
;access.log = log/$pool.access.log
access.log = /var/log/php-fpm-legacy/access/$pool.log
; The access log format.
; The following syntax is allowed
; %%: the '%' character
; %C: %CPU used by the request
; it can accept the following format:
; - %{user}C for user CPU only
; - %{system}C for system CPU only
; - %{total}C for user + system CPU (default)
; %d: time taken to serve the request
; it can accept the following format:
; - %{seconds}d (default)
; - %{milliseconds}d
; - %{milli}d
; - %{microseconds}d
; - %{micro}d
; %e: an environment variable (same as $_ENV or $_SERVER)
; the name of the env variable must be given in braces.
; Some examples:
; - server specifics like: %{REQUEST_METHOD}e or %{SERVER_PROTOCOL}e
; - HTTP headers like: %{HTTP_HOST}e or %{HTTP_USER_AGENT}e
; %f: script filename
; %l: content-length of the request (for POST request only)
; %m: request method
; %M: peak of memory allocated by PHP
; it can accept the following format:
; - %{bytes}M (default)
; - %{kilobytes}M
; - %{kilo}M
; - %{megabytes}M
; - %{mega}M
; %n: pool name
; %o: output header
; the name of the header must be given in braces:
; - %{Content-Type}o
; - %{X-Powered-By}o
; - %{Transfer-Encoding}o
; - ....
; %p: PID of the child that serviced the request
; %P: PID of the parent of the child that serviced the request
; %q: the query string
; %Q: the '?' character if query string exists
; %r: the request URI (without the query string, see %q and %Q)
; %R: remote IP address
; %s: status (response code)
; %t: server time the request was received
; it can accept a strftime(3) format:
; %d/%b/%Y:%H:%M:%S %z (default)
; The strftime(3) format must be encapsulated in a %{<strftime_format>}t tag
; e.g. for an ISO 8601 formatted timestamp, use: %{%Y-%m-%dT%H:%M:%S%z}t
; %T: time the log has been written (the request has finished)
; it can accept a strftime(3) format:
; %d/%b/%Y:%H:%M:%S %z (default)
; The strftime(3) format must be encapsulated in a %{<strftime_format>}t tag
; e.g. for an ISO 8601 formatted timestamp, use: %{%Y-%m-%dT%H:%M:%S%z}t
; %u: remote user
;
; Default: "%R - %u %t \"%m %r\" %s"
;access.format = "%R - %u %t \"%m %r%Q%q\" %s %f %{milli}d %{kilo}M %C%%"
access.format = "%{%Y-%m-%dT%H:%M:%S%z}t %R: \"%m %r%Q%q\" %s %f %{milli}d %{kilo}M %C%%"
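; With the format above, a logged request might look like this (the host,
; timestamp and numbers are illustrative, not real log output):
;   2024-01-02T12:47:57+0100 192.168.1.10: "GET /index.php" 200 /usr/share/webapps/nextcloud/index.php 42.123 10240 1.50%
; i.e. time, remote IP, method and URI, status, script, duration in ms,
; peak memory in kB, and %CPU.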
; The log file for slow requests
; Default Value: not set
; Note: slowlog is mandatory if request_slowlog_timeout is set
;slowlog = log/$pool.log.slow
; The timeout for serving a single request after which a PHP backtrace will be
; dumped to the 'slowlog' file. A value of '0s' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
;request_slowlog_timeout = 0
; Depth of slow log stack trace.
; Default Value: 20
;request_slowlog_trace_depth = 20
; The timeout for serving a single request after which the worker process will
; be killed. This option should be used when the 'max_execution_time' ini option
; does not stop script execution for some reason. A value of '0' means 'off'.
; Available units: s(econds)(default), m(inutes), h(ours), or d(ays)
; Default Value: 0
;request_terminate_timeout = 0
; The timeout set by the 'request_terminate_timeout' option is normally not
; engaged after the application calls 'fastcgi_finish_request', or when the
; application has finished and shutdown functions are being called (registered
; via register_shutdown_function). This option enables the timeout to be
; applied unconditionally even in such cases.
; Default Value: no
;request_terminate_timeout_track_finished = no
; Set open file descriptor rlimit.
; Default Value: system defined value
;rlimit_files = 1024
; Set max core size rlimit.
; Possible Values: 'unlimited' or an integer greater than or equal to 0
; Default Value: system defined value
;rlimit_core = 0
; Chroot to this directory at the start. This value must be defined as an
; absolute path. When this value is not set, chroot is not used.
; Note: you can prefix with '$prefix' to chroot to the pool prefix or one
; of its subdirectories. If the pool prefix is not set, the global prefix
; will be used instead.
; Note: chrooting is a great security feature and should be used whenever
; possible. However, all PHP paths will be relative to the chroot
; (error_log, sessions.save_path, ...).
; Default Value: not set
;chroot =
; Chdir to this directory at the start.
; Note: relative path can be used.
; Default Value: current directory or / when chroot
;chdir = /srv/http
chdir = /usr/share/webapps/$pool
; Redirect worker stdout and stderr into main error log. If not set, stdout and
; stderr will be redirected to /dev/null according to FastCGI specs.
; Note: in high-load environments, this can cause some delay in the page
; process time (several ms).
; Default Value: no
;catch_workers_output = yes
; Decorate worker output with a prefix and suffix containing information about
; the child that writes to the log, whether stdout or stderr is used, as well
; as the log level and time. This option is used only if catch_workers_output
; is yes. Setting this to "no" will output data as written to stdout or stderr.
; Default value: yes
;decorate_workers_output = no
; Clear environment in FPM workers
; Prevents arbitrary environment variables from reaching FPM worker processes
; by clearing the environment in workers before env vars specified in this
; pool configuration are added.
; Setting to "no" will make all environment variables available to PHP code
; via getenv(), $_ENV and $_SERVER.
; Default Value: yes
;clear_env = no
; Limits the extensions of the main script FPM will allow to parse. This can
; prevent configuration mistakes on the web server side. You should only limit
; FPM to .php extensions to prevent malicious users from using other
; extensions to execute PHP code.
; Note: set an empty value to allow all extensions.
; Default Value: .php
;security.limit_extensions = .php .php3 .php4 .php5 .php7
; Pass environment variables like LD_LIBRARY_PATH. All $VARIABLEs are taken from
; the current environment.
; Default Value: clean env
env[HOSTNAME] = $HOSTNAME
env[PATH] = /usr/local/bin:/usr/bin
env[TMP] = /tmp
env[TMPDIR] = /tmp
env[TEMP] = /tmp
; Additional php.ini defines, specific to this pool of workers. These settings
; overwrite the values previously defined in the php.ini. The directives are the
; same as the PHP SAPI:
; php_value/php_flag - you can set classic ini defines which can
; be overwritten from PHP call 'ini_set'.
; php_admin_value/php_admin_flag - these directives won't be overwritten by
; PHP call 'ini_set'
; For php_*flag, valid values are on, off, 1, 0, true, false, yes or no.
; Defining 'extension' will load the corresponding shared extension from
; extension_dir. Defining 'disable_functions' or 'disable_classes' will not
; overwrite previously defined php.ini values, but will append the new value
; instead.
; Note: path INI options can be relative and will be expanded with the prefix
; (pool, global or /usr)
; Default Value: nothing is defined by default except the values in php.ini and
; specified at startup with the -d argument
;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f www@my.domain.com
;php_flag[display_errors] = off
;php_admin_value[error_log] = /var/log/fpm-$pool-error.log
;php_admin_flag[log_errors] = on
;php_admin_value[memory_limit] = 32M
php_value[date.timezone] = Europe/Zurich
php_value[open_basedir] = /var/lib/$pool:/tmp:/usr/share/webapps/$pool:/etc/webapps/$pool:/dev/urandom:/usr/lib/php-legacy/modules:/var/log/$pool:/proc/meminfo:/proc/cpuinfo
; put session data in dedicated directory
php_value[session.save_path] = /var/lib/$pool/sessions
php_value[session.gc_maxlifetime] = 21600
php_value[session.gc_divisor] = 500
php_value[session.gc_probability] = 1
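; With the three values above, each request triggers session garbage
; collection with probability gc_probability/gc_divisor = 1/500 = 0.2 %,
; and a collected session is removed once it has been idle longer than
; 21600 s (6 hours).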
php_flag[expose_php] = false
php_value[post_max_size] = 1000M
php_value[upload_max_filesize] = 1000M
; as recommended in admin manual (avoids related warning in admin GUI later)
php_flag[output_buffering] = off
php_value[max_input_time] = 120
php_value[max_execution_time] = 60
php_value[memory_limit] = 512M
; opcache settings must be defined in php-fpm.ini; when defined here instead,
; they cause segmentation faults in php-fpm worker processes
; uncomment if php-apcu is installed and used
; php_value[extension] = apcu
php_admin_value[apc.ttl] = 7200
php_value[extension] = bcmath
php_value[extension] = bz2
php_value[extension] = exif
php_value[extension] = gd
php_value[extension] = gmp
php_value[extension] = iconv
; uncomment if php-imagick is installed and used
php_value[extension] = imagick
php_value[extension] = intl
; uncomment if php-memcached is installed and used
; php_value[extension] = memcached
; uncomment exactly one of the pdo extensions depending on what database is used
; php_value[extension] = pdo_mysql
php_value[extension] = pdo_pgsql
; php_value[extension] = pdo_sqlite
; uncomment if php-igbinary is installed and used (e.g. required by redis)
; php_value[extension] = igbinary
; uncomment if php-redis is installed and used (requires php-igbinary)
; php_value[extension] = redis
; sysvsem required since nextcloud 26
php_value[extension] = sysvsem
; uncomment if php-xsl is installed and used
; php_value[extension] = xsl

upstream php-handler {
#server 127.0.0.1:9000;
server unix:/run/php-fpm-legacy/nextcloud.sock;
}
# Set the `immutable` cache control options only for assets with a cache busting `v` argument
map $arg_v $asset_immutable {
"" "";
default "immutable";
}
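# Example: a request such as /core/css/server.css?v=abc123 (path is
# illustrative) carries a cache-busting `v` argument, so $asset_immutable
# expands to "immutable" and the static-asset location below adds it to the
# Cache-Control header; without the argument the header omits "immutable".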
server {
listen 80;
listen [::]:80;
server_name cloud.example.com;
# Prevent nginx HTTP Server Detection
server_tokens off;
# Enforce HTTPS
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name cloud.example.com;
# Path to the root of your installation
root /usr/share/webapps/nextcloud;
# Use Mozilla's guidelines for SSL/TLS settings
# https://mozilla.github.io/server-side-tls/ssl-config-generator/
ssl_certificate /etc/letsencrypt/live/cloud.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/cloud.example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
# Prevent nginx HTTP Server Detection
server_tokens off;
# HSTS settings
# WARNING: Only add the preload option once you read about
# the consequences in https://hstspreload.org/. This option
# will add the domain to a hardcoded list that is shipped
# in all major browsers and getting removed from this list
# could take several months.
#add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload" always;
# set max upload size and increase upload timeout:
client_max_body_size 512M;
client_body_timeout 300s;
fastcgi_buffers 64 4K;
# Enable gzip but do not remove ETag headers
gzip on;
gzip_vary on;
gzip_comp_level 4;
gzip_min_length 256;
gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
gzip_types application/atom+xml text/javascript application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/wasm application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
# Pagespeed is not supported by Nextcloud, so if your server is built
# with the `ngx_pagespeed` module, uncomment this line to disable it.
#pagespeed off;
# This setting allows you to optimize the HTTP/2 bandwidth.
# See https://blog.cloudflare.com/delivering-http-2-upload-speed-improvements/
# for tuning hints
client_body_buffer_size 512k;
# HTTP response headers borrowed from Nextcloud `.htaccess`
add_header Referrer-Policy "no-referrer" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Permitted-Cross-Domain-Policies "none" always;
add_header X-Robots-Tag "noindex, nofollow" always;
add_header X-XSS-Protection "1; mode=block" always;
# Remove X-Powered-By, which is an information leak
fastcgi_hide_header X-Powered-By;
# Add .mjs as a file extension for javascript
# Either include it in the default mime.types list
# or include that list explicitly and add the file extension
# only for Nextcloud like below:
include mime.types;
types {
text/javascript js mjs;
}
# Specify how to handle directories -- specifying `/index.php$request_uri`
# here as the fallback means that Nginx always exhibits the desired behaviour
# when a client requests a path that corresponds to a directory that exists
# on the server. In particular, if that directory contains an index.php file,
# that file is correctly served; if it doesn't, then the request is passed to
# the front-end controller. This consistent behaviour means that we don't need
# to specify custom rules for certain paths (e.g. images and other assets,
# `/updater`, `/ocs-provider`), and thus
# `try_files $uri $uri/ /index.php$request_uri`
# always provides the desired behaviour.
index index.php index.html /index.php$request_uri;
# Rule borrowed from `.htaccess` to handle Microsoft DAV clients
location = / {
if ( $http_user_agent ~ ^DavClnt ) {
return 302 /remote.php/webdav/$is_args$args;
}
}
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
# Make a regex exception for `/.well-known` so that clients can still
# access it despite the existence of the regex rule
# `location ~ /(\.|autotest|...)` which would otherwise handle requests
# for `/.well-known`.
location ^~ /.well-known {
# The rules in this block are an adaptation of the rules
# in `.htaccess` that concern `/.well-known`.
location = /.well-known/carddav { return 301 /remote.php/dav/; }
location = /.well-known/caldav { return 301 /remote.php/dav/; }
location /.well-known/acme-challenge { try_files $uri $uri/ =404; }
location /.well-known/pki-validation { try_files $uri $uri/ =404; }
# Let Nextcloud's API for `/.well-known` URIs handle all other
# requests by passing them to the front-end controller.
return 301 /index.php$request_uri;
}
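# A quick sanity check for the redirects above (hypothetical host):
#   curl -sI https://cloud.example.com/.well-known/carddav
# should answer with status 301 and "Location: /remote.php/dav/".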
# Rules borrowed from `.htaccess` to hide certain paths from clients
location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)(?:$|/) { return 404; }
location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) { return 404; }
# Ensure this block, which passes PHP files to the PHP process, is above the blocks
# which handle static assets (as seen below). If this block is not declared first,
# then Nginx will encounter an infinite rewriting loop when it prepends `/index.php`
# to the URI, resulting in an HTTP 500 error response.
location ~ \.php(?:$|/) {
# Required for legacy support
rewrite ^/(?!index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|ocs-provider\/.+|.+\/richdocumentscode\/proxy) /index.php$request_uri;
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
set $path_info $fastcgi_path_info;
try_files $fastcgi_script_name =404;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $path_info;
fastcgi_param HTTPS on;
fastcgi_param modHeadersAvailable true; # Avoid sending the security headers twice
fastcgi_param front_controller_active true; # Enable pretty urls
fastcgi_pass php-handler;
fastcgi_intercept_errors on;
fastcgi_request_buffering off;
fastcgi_max_temp_file_size 0;
}
# Serve static files
location ~ \.(?:css|js|mjs|svg|gif|png|jpg|ico|wasm|tflite|map|ogg|flac)$ {
try_files $uri /index.php$request_uri;
add_header Cache-Control "public, max-age=15778463, $asset_immutable";
access_log off; # Optional: Don't log access to assets
location ~ \.wasm$ {
default_type application/wasm;
}
}
location ~ \.woff2?$ {
try_files $uri /index.php$request_uri;
expires 7d; # Cache-Control policy borrowed from `.htaccess`
access_log off; # Optional: Don't log access to assets
}
# Rule borrowed from `.htaccess`
location /remote {
return 301 /remote.php$request_uri;
}
location / {
try_files $uri $uri/ /index.php$request_uri;
}
}