Last modification on
This was a Christmas hack for fun and non-profit. I wanted to write a chess puzzle book generator, inspired by 1001 Deadly Checkmates by John Nunn (ISBN-13: 978-1906454258) and similar puzzle books.
Terminal version:
curl -s 'https://codemadness.org/downloads/puzzles/index.vt' | less -R
I may or may not periodically update this page :)
Time flies (since Christmas); here is a Valentine edition with attraction puzzles (not only checkmates) using the red "love" theme. It is optimized for his and her pleasure:
https://codemadness.org/downloads/puzzles-valentine/
git clone git://git.codemadness.org/chess-puzzles
You can browse the source-code at:
The generate.sh shellscript generates the output and files for the puzzles.
The puzzles used are from the lichess.org puzzle database: https://database.lichess.org/#puzzles
This database is a big CSV file containing the initial board state in the Forsyth-Edwards Notation (FEN) format and the moves in Universal Chess Interface (UCI) format. Each line contains the board state and the initial and solution moves.
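For illustration, one such line can be split on commas with awk. This is a sketch: the sample line is shortened and illustrative, and it assumes the FEN is the second and the UCI moves the third comma-separated field of the lichess CSV.

```shell
# sketch: split one puzzle line on commas; assumes the FEN is field 2 and
# the UCI moves are field 3 (sample line shortened and illustrative).
line='000hf,r1bqk2r/pp1nbppp/8/8 w - - 0 1,e2e4 e7e5,1200'
printf '%s\n' "$line" | awk -F ',' '{ print "FEN:   " $2; print "moves: " $3 }'
```

Because the FEN and move list contain spaces but never commas, a plain comma split is enough here.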
The generated index page is an HTML page listing the puzzles. Each puzzle on this page is an SVG image; this scalable image format looks good at all resolutions.
Lichess is an open-source and gratis website to play on-line chess. There are no paid levels to unlock features. All the software hosting Lichess is open-source and anyone can register and play chess on it for free. Most of the data about the games played is also open.
However, the website depends on your donations or contributions. If you can, please do so.
Reads puzzles from the database and shuffles them. Does some rough sorting and categorization based on difficulty and assigns score points.
The random shuffling is done using a hard-coded random seed. This means on the same machine with the same puzzle database it will regenerate the same sequence of random puzzles in a deterministic manner.
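A minimal sketch of such a deterministic shuffle (not the author's exact implementation): decorate each line with a pseudo-random key derived from a fixed seed, sort on the key, then strip it. The exact order depends on the awk implementation, but it is stable on the same machine, matching the behaviour described above.

```shell
# sketch: deterministic shuffle using a fixed awk srand() seed.
shuffle() {
	awk -v seed=1234 'BEGIN { srand(seed) } { printf "%.8f\t%s\n", rand(), $0 }' |
		sort -n | cut -f 2-
}
# the same input always yields the same "random" order:
printf 'a\nb\nc\nd\n' | shuffle
```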
It outputs HTML, with support for CSS dark mode, and does not use JavaScript. It includes a plain-text listing of the puzzle solutions in PGN notation. It also outputs .vt files suitable for the terminal, using Unicode symbols for the chess pieces and RGB color escape sequences for the board theme.
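For example, a single board square in such a .vt file might be drawn like this (the colors here are illustrative, not the actual theme):

```shell
# print a black knight on a light square using a 24-bit RGB background
# escape sequence, then reset the attributes:
printf '\033[48;2;240;217;181m ♞ \033[0m\n'
```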
This is a program written in C to read and parse the board state in FEN format and read the UCI moves. It can output to various formats.
See the man page for detailed usage information.
fen.c supports the following output formats:
fen.c can also run in CGI mode. This can be used on an HTTP server:
Terminal output:
curl -s 'https://codemadness.org/onlyfens?moves=e2e4%20e7e5&output=tty'
For PGN and "speak mode" there is an option to output Dutch-notated PGN or speech as well.
For example:
There is an included example script that can stream Lichess games to the terminal using the Lichess API. It displays the board using terminal escape codes. The games are automatically annotated with PGN notation and with text describing how a human would say the notation. This text can also be piped to a speech synthesizer like espeak for audio.
pgn-extract is a useful tool to convert Portable Game Notation (PGN) to Universal Chess Interface (UCI) moves (or do many other useful chess related things!).
There's also an example script included that can generate an animated GIF from PGN using ffmpeg.
It creates an optimal color palette from the input images and generates an optimized animated GIF. The last move (typically some checkmate) is displayed slightly longer.
chess-puzzles source-code:
https://www.codemadness.org/git/chess-puzzles/file/README.html
Lichess FEN puzzle database:
https://database.lichess.org/#puzzles
lichess.org:
https://lichess.org/
SVG of the individual pieces used in fen.c:
https://github.com/lichess-org/lila/tree/master/public/piece/cburnett
pgn-extract:
A great multi-purpose PGN manipulation program with many options:
https://www.cs.kent.ac.uk/people/staff/djb/pgn-extract/
An example to convert PGN games to UCI moves:
pgn-extract --notags -Wuc
Lichess API:
https://lichess.org/api
Stockfish:
Strong open-source chess engine and analysis tool:
https://stockfishchess.org/
This describes a simple shellscript programming pattern to process a list of jobs in parallel. This script example is contained in one file.
#!/bin/sh
maxjobs=4

# fake program for example purposes.
someprogram() {
	echo "Yep yep, I'm totally a real program!"
	sleep "$1"
}

# run(arg1, arg2)
run() {
	echo "[$1] $2 started" >&2
	someprogram "$1" >/dev/null
	status="$?"
	echo "[$1] $2 done" >&2
	return "$status"
}

# process the jobs.
j=1
for f in 1 2 3 4 5 6 7 8 9 10; do
	run "$f" "something" &
	jm=$((j % maxjobs)) # shell arithmetic: modulo
	test "$jm" = "0" && wait
	j=$((j+1))
done
wait
This is less optimal because it waits until all jobs in the same batch are finished (each batch contains $maxjobs items).
For example with 2 items per batch and 4 total jobs it could be:
This could be optimized to:
It also does not handle signals such as SIGINT (^C). However the xargs example below does:
#!/bin/sh
maxjobs=4

# fake program for example purposes.
someprogram() {
	echo "Yep yep, I'm totally a real program!"
	sleep "$1"
}

# run(arg1, arg2)
run() {
	echo "[$1] $2 started" >&2
	someprogram "$1" >/dev/null
	status="$?"
	echo "[$1] $2 done" >&2
	return "$status"
}

# child process job.
if test "$CHILD_MODE" = "1"; then
	run "$1" "$2"
	exit "$?"
fi

# generate a list of jobs for processing.
list() {
	for f in 1 2 3 4 5 6 7 8 9 10; do
		printf '%s\0%s\0' "$f" "something"
	done
}

# process jobs in parallel.
list | CHILD_MODE="1" xargs -r -0 -P "${maxjobs}" -L 2 "$(readlink -f "$0")"
Although the above example is kind of stupid, it already shows the queueing of jobs is more efficient.
Script 1:
time ./script1.sh
[...snip snip...]
real 0m22.095s
Script 2:
time ./script2.sh
[...snip snip...]
real 0m18.120s
The parent process:
The child process:
The command-line arguments are passed by the parent using xargs.
The environment variable $CHILD_MODE is set to indicate to the script itself it is run as a child process of the script.
The script itself (ran in child-mode process) only executes the task and signals its status back to xargs and the parent.
The exit status of the child program is signaled back to xargs. This could be handled, for example, to stop on the first failure (in this example it is not). If the invoked program is killed, stopped, or exits with status 255, then xargs stops running as well.
From the OpenBSD man page: https://man.openbsd.org/xargs
xargs - construct argument list(s) and execute utility
Options explained:
Some of the options, like -P, are as of writing (2023) non-POSIX: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/xargs.html. However, many systems have supported this useful extension for many years now.
The specification even mentions implementations which support parallel operations:
"The version of xargs required by this volume of POSIX.1-2017 is required to wait for the completion of the invoked command before invoking another command. This was done because historical scripts using xargs assumed sequential execution. Implementations wanting to provide parallel operation of the invoked utilities are encouraged to add an option enabling parallel invocation, but should still wait for termination of all of the children before xargs terminates normally."
Some historic context:
The xargs -0 option was added on 1996-06-11 by Theo de Raadt, about a year after the NetBSD import (over 27 years ago at the time of writing):
On OpenBSD the xargs -P option was added on 2003-12-06 by syncing the FreeBSD code:
Looking at the imported git history log of GNU findutils (which has xargs), the very first commit already had the -0 and -P option:
commit c030b5ee33bbec3c93cddc3ca9ebec14c24dbe07
Author: Kevin Dalley <kevin@seti.org>
Date: Sun Feb 4 20:35:16 1996 +0000
Initial revision
Depending on what you want to do, a workaround could be to use the -0 option with a single field and the -n flag. Then in each child program invocation split the field by a separator.
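A sketch of that workaround: pack both arguments into one null-terminated record with a "|" separator, pass one record per invocation with -n 1, and split it again in the child.

```shell
# pack two fields into one record, split by "|" in the child shell:
printf '%s|%s\0' "1" "one" "2" "two" |
xargs -0 -n 1 sh -c 'a=${1%%|*}; b=${1#*|}; echo "$a=$b"' sh
# prints:
# 1=one
# 2=two
```

The separator must of course be a character that cannot occur in the fields themselves.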
... improved at least for my preferences ;)
It scrapes the channel data from Youtube and combines it with the parsed Atom feed of the channel.
The Atom parser is based on sfeed, with some of the code removed because it is not needed by this program. It scrapes the metadata of the videos from the channel's HTML page and uses my custom JSON parser to convert the JavaScript/JSON structure.
This parser is also used by the json2tsv tool. It has few dependencies.
There is an option to run directly from the command-line or in CGI mode. When the environment variable $REQUEST_URI is set, it automatically runs in CGI mode.
Command-line usage:
youtube_feed channelid atom
youtube_feed channelid gph
youtube_feed channelid html
youtube_feed channelid json
youtube_feed channelid tsv
youtube_feed channelid txt
CGI program usage:
The last basename part of the URL should be the channelid plus the output format extension. It defaults to TSV if there is no extension. The CGI program can be used with an HTTPd or a Gopher daemon such as geomyidae.
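A sketch (not the program's actual parsing code) of how the last path component could be split into a channelid and an output format in shell:

```shell
# split the last path component into channelid and format extension,
# defaulting to tsv when there is no extension:
parsepath() {
	base="${1##*/}"         # strip the directory part
	channelid="${base%%.*}" # part before the first dot
	case "$base" in
	*.*) ext="${base##*.}" ;;
	*) ext="tsv" ;;
	esac
	echo "$channelid $ext"
}
parsepath "/yt-chan/UCrbvoMC0zUvPL8vjswhLOSw.json" # UCrbvoMC0zUvPL8vjswhLOSw json
parsepath "/yt-chan/UCrbvoMC0zUvPL8vjswhLOSw"      # UCrbvoMC0zUvPL8vjswhLOSw tsv
```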
For example:
Atom XML: https://codemadness.org/yt-chan/UCrbvoMC0zUvPL8vjswhLOSw.xml
HTML: https://codemadness.org/yt-chan/UCrbvoMC0zUvPL8vjswhLOSw.html
JSON: https://codemadness.org/yt-chan/UCrbvoMC0zUvPL8vjswhLOSw.json
TSV: https://codemadness.org/yt-chan/UCrbvoMC0zUvPL8vjswhLOSw.tsv
twtxt: https://codemadness.org/yt-chan/UCrbvoMC0zUvPL8vjswhLOSw.txt
TSV, default: https://codemadness.org/yt-chan/UCrbvoMC0zUvPL8vjswhLOSw
Gopher dir: gopher://codemadness.org/1/feed.cgi/UCrbvoMC0zUvPL8vjswhLOSw.gph
Gopher TSV: gopher://codemadness.org/0/feed.cgi/UCrbvoMC0zUvPL8vjswhLOSw
An OpenBSD httpd.conf using slowcgi as an example:
server "codemadness.org" {
	location "/yt-chan/*" {
		request strip 1
		root "/cgi-bin/yt-chan"
		fastcgi socket "/run/slowcgi.sock"
	}
}
sfeedrc example of an existing Youtube RSS/Atom feed:
# list of feeds to fetch:
feeds() {
	# feed <name> <feedurl> [basesiteurl] [encoding]
	# normal Youtube Atom feed.
	feed "yt IM" "https://www.youtube.com/feeds/videos.xml?channel_id=UCrbvoMC0zUvPL8vjswhLOSw"
}
Use the new Atom feed directly using the CGI-mode and Atom output format:
# list of feeds to fetch:
feeds() {
	# feed <name> <feedurl> [basesiteurl] [encoding]
	# new Youtube Atom feed.
	feed "idiotbox IM" "https://codemadness.org/yt-chan/UCrbvoMC0zUvPL8vjswhLOSw.xml"
}
... or convert directly using a custom connector program on the local system via the command-line:
# fetch(name, url, feedfile)
fetch() {
	case "$1" in
	"connector example")
		youtube_feed "$2";;
	*)
		curl -L --max-redirs 0 -H "User-Agent:" -f -s -m 15 \
			"$2" 2>/dev/null;;
	esac
}

# parse and convert input, by default XML to the sfeed(5) TSV format.
# parse(name, feedurl, basesiteurl)
parse() {
	case "$1" in
	"connector example")
		cat;;
	*)
		sfeed "$3";;
	esac
}
# list of feeds to fetch:
feeds() {
	# feed <name> <feedurl> [basesiteurl] [encoding]
	feed "connector example" "UCrbvoMC0zUvPL8vjswhLOSw"
}
git clone git://git.codemadness.org/frontends
You can browse the source-code at:
The program is: youtube/feed
$ make
# make install
I hope sharing this is useful to someone other than me as well.
This script is tested on OpenBSD using OpenBSD smtpd and OpenBSD httpd and the gopher daemon geomyidae.
On OpenBSD:
pkg_add mblaze
In your mail aliases (for example /etc/mail/aliases) put:
paste: |/usr/local/bin/paste-mail
This pipes the mail to the script paste-mail for processing; this script is described below. Copy the contents below into /usr/local/bin/paste-mail.
Script:
#!/bin/sh
d="/home/www/domains/www.codemadness.org/htdocs/mailpaste"

tmpmsg=$(mktemp)
tmpmail=$(mktemp)

cleanup() {
	rm -f "$tmpmail" "$tmpmsg"
}

# store whole mail from stdin temporarily, on exit remove temporary file.
trap "cleanup" EXIT
cat > "$tmpmail"

# mblaze: don't store mail sequence.
MAILSEQ=/dev/null
export MAILSEQ

# get from address (without display name).
from=$(maddr -a -h 'From' /dev/stdin < "$tmpmail")

# check if allowed or not.
case "$from" in
"hiltjo@codemadness.org")
	;;
*)
	exit 0;;
esac

# prevent mail loop.
if printf '%s' "$from" | grep -q "paste@"; then
	exit 0
fi

echo "Thank you for using the enterprise paste service." > "$tmpmsg"
echo "" >> "$tmpmsg"
echo "Your file(s) are available at:" >> "$tmpmsg"
echo "" >> "$tmpmsg"

# process each attachment.
mshow -n -q -t /dev/stdin < "$tmpmail" | sed -nE 's@.*name="(.*)".*@\1@p' | while read -r name; do
	test "$name" = "" && continue

	# extract attachment.
	tmpfile=$(mktemp -p "$d" XXXXXXXXXXXX)
	mshow -n -O /dev/stdin "$name" < "$tmpmail" > "$tmpfile"

	# use file extension.
	ext="${name##*/}"
	case "$ext" in
	*.tar.*)
		# special case: support .tar.gz, tar.bz2, etc.
		ext="tar.${ext##*.}";;
	*.*)
		ext="${ext##*.}";;
	*)
		ext="";;
	esac
	ext="${ext%%*.}"

	# use file extension if it is set.
	outputfile="$tmpfile"
	if test "$ext" != ""; then
		outputfile="$tmpfile.$ext"
	fi
	mv "$tmpfile" "$outputfile"

	b=$(basename "$outputfile")
	chmod 666 "$outputfile"

	url="gopher://codemadness.org/9/mailpaste/$b"
	echo "$name:" >> "$tmpmsg"
	echo " Text file: gopher://codemadness.org/0/mailpaste/$b" >> "$tmpmsg"
	echo " Image file: gopher://codemadness.org/I/mailpaste/$b" >> "$tmpmsg"
	echo " Binary file: gopher://codemadness.org/9/mailpaste/$b" >> "$tmpmsg"
	echo "" >> "$tmpmsg"
done

echo "" >> "$tmpmsg"
echo "Sincerely," >> "$tmpmsg"
echo "Your friendly paste_bot" >> "$tmpmsg"

# mail back the user.
mail -r "$from" -s "Your files" "$from" < "$tmpmsg"

cleanup
The mail daemon processing the mail of course needs permission to write to the specified directory. The user receiving the mail needs to be able to read it from a location they can access and have permissions for as well.
This is just an example script. There is room for many improvements. Feel free to change it in any way you like.
I hope this enterprise(tm) mail service is inspirational or something ;)
This article describes a TODO application or workflow.
It works like this:
The text format I use is this:
[indentation]<symbol><SPACE><item text><NEWLINE>
Most of the time an item is just one line. This format is just a general guideline to keep the items somewhat structured.
Items are prefixed with a symbol.
I use an indentation with a TAB before an item to indicate item dependencies. The items can be nested.
For prioritization I order the most important items and sections from top to bottom. These can be reshuffled as you wish, of course.
To delete an item you remove the line. To archive an item you keep the line.
A section is a line which has no symbol. This is like a header to group items.
Checklist for releasing project 0.1:
- Test project with different compilers and check for warnings.
- Documentation:
	- Proofread and make sure it matches all program behaviour.
	- Run mandoc -Tlint on the man pages.
	? Copy useful examples from the README file to the man page?
- Run testsuite and check for failures before release.

project 0.2:
? Investigate if feature mentioned by some user is worth adding.
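Because the format is plain text, standard tools work on it directly; for example, listing the open question items (symbol "?") with their line numbers:

```shell
# create a small example TODO file and list the open question items:
printf 'project 0.2:\n? Investigate if a feature is worth adding.\n' > TODO
awk '/^\t*\?/ { print NR ": " $0 }' TODO
```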
ssh -t host 'ed TODO'
ssh host
tmux or tmux a
ed TODO
git add TODO
git commit -m 'TODO: update'
I hope this is inspirational or something,
This describes how to use 2FA without using crappy authenticator "apps" or a mobile device.
On OpenBSD:
pkg_add oath-toolkit zbar
On Void Linux:
xbps-install oath-toolkit zbar
There is probably a package for your operating system.
Save the QR code image from the authenticator app or website to an image file. Scan the QR code text from the image:
zbarimg image.png
An example QR code:
The output is typically something like:
QR-Code:otpauth://totp/Example:someuser@codemadness.org?secret=SECRETKEY&issuer=Codemadness
You only need to scan this QR code for the secret key once. Make sure to store the secret key in a private, safe place and don't show it to anyone else.
Using the secret key, the following command outputs a 6-digit code by default. In this example we also assume the key is base32-encoded. There can be other parameters and options; these are documented in the Yubico URI string format reference below.
Command:
oathtool --totp -b SOMEKEY
Tip: you can create a script that automatically puts the digits in the clipboard, for example:
oathtool --totp -b SOMEKEY | xclip
This describes how to setup an OpenBSD RISCV64 VM in QEMU.
The shellscript below does the following:
The script is tested on Void Linux and OpenBSD-current hosts.
IMPORTANT!: The signature and checksum for the miniroot, u-boot and opensbi files are not verified. If the host is OpenBSD make sure to instead install the packages (pkg_add u-boot-riscv64 opensbi) and adjust the firmware path for the qemu -bios and -kernel options.
#!/bin/sh
# mirror list: https://www.openbsd.org/ftp.html
mirror="https://ftp.bit.nl/pub/OpenBSD/"
release="7.0"
minirootname="miniroot70.img"

miniroot() {
	test -f "${minirootname}" && return # download once
	url="${mirror}/${release}/riscv64/${minirootname}"
	curl -o "${minirootname}" "${url}"
}

createrootdisk() {
	test -f disk.raw && return # create once
	qemu-img create disk.raw 10G # create 10 GB disk
	dd conv=notrunc if=${minirootname} of=disk.raw # write miniroot to disk
}

opensbi() {
	f="opensbi.tgz"
	test -f "${f}" && return # download and extract once.
	url="${mirror}/${release}/packages/amd64/opensbi-0.9p0.tgz"
	curl -o "${f}" "${url}"
	tar -xzf "${f}" share/opensbi/generic/fw_jump.bin
}

uboot() {
	f="uboot.tgz"
	test -f "${f}" && return # download and extract once.
	url="${mirror}/${release}/packages/amd64/u-boot-riscv64-2021.07p0.tgz"
	curl -o "${f}" "${url}"
	tar -xzf "${f}" share/u-boot/qemu-riscv64_smode/u-boot.bin
}

setup() {
	miniroot
	createrootdisk
	opensbi
	uboot
}

run() {
	qemu-system-riscv64 \
		-machine virt \
		-nographic \
		-m 2048M \
		-smp 2 \
		-bios share/opensbi/generic/fw_jump.bin \
		-kernel share/u-boot/qemu-riscv64_smode/u-boot.bin \
		-drive file=disk.raw,format=raw,id=hd0 -device virtio-blk-device,drive=hd0 \
		-netdev user,id=net0,ipv6=off -device virtio-net-device,netdev=net0
}

setup
run
sfeed_curses is a curses UI front-end for sfeed. It is now part of sfeed.
It shows the TAB-separated feed items in a graphical command-line UI. The interface has a look inspired by the mutt mail client. It has a sidebar panel for the feeds, a panel with a listing of the items and a small statusbar for the selected item/URL. Some functions like searching and scrolling are integrated in the interface itself.
Like the format programs included in sfeed you can run it by giving the feed files as arguments like this:
sfeed_curses ~/.sfeed/feeds/*
... or by reading directly from stdin:
sfeed_curses < ~/.sfeed/feeds/xkcd
It will show a sidebar if one or more files are specified as parameters. It will not show the sidebar by default when reading from stdin.
On pressing the 'o' or ENTER keybind it will open the link URL of an item with the plumb program. On pressing the 'a', 'e' or '@' keybind it will open the enclosure URL if there is one. The default plumb program is set to xdg-open, but can be modified by setting the environment variable $SFEED_PLUMBER. The plumb program receives the URL as a command-line argument.
The TAB-Separated-Value line of the current selected item in the feed file can be piped to a program by pressing the 'c', 'p' or '|' keybind. This allows much flexibility to make a content formatter or write other custom actions or views. This line is in the exact same format as described in the sfeed(5) man page.
The pipe program can be changed by setting the environment variable $SFEED_PIPER.
The above screenshot shows the included sfeed_content shellscript which uses the lynx text-browser to convert HTML to plain-text. It pipes the formatted plain-text to the user $PAGER (or "less").
Of course the script can be easily changed to use a different browser or HTML-to-text converter like:
It's easy to modify the color theme by changing the macros in the source-code or by setting a predefined theme at compile-time. The README file contains information on how to set a theme. On the left is a TempleOS-like color theme; on the right a newsboat-like colorscheme.
It supports vertical, horizontal and monocle (full-screen) layouts. This can be useful for different kinds of screen sizes. The keybinds '1', '2' and '3' can be used to switch between these layouts.
git clone git://git.codemadness.org/sfeed
You can browse the source-code at:
Releases are available at:
$ make
# make install
hurl is a relatively simple HTTP, HTTPS and Gopher client/file grabber.
Sometimes (or most of the time?) you just want to fetch a file via the HTTP, HTTPS or Gopher protocol.
The focus of this tool is only this.
git clone git://git.codemadness.org/hurl
You can browse the source-code at:
Releases are available at:
$ make
# make install
Fetch the Atom feed from this site using a maximum filesize limit of 1MB and a time-out limit of 15 seconds:
hurl -m 1048576 -t 15 "https://codemadness.org/atom.xml"
There is an -H option to add custom headers. This way some of the anti-features listed above are supported. For example some CDNs like Cloudflare are known to block empty or certain User-Agents.
User-Agent:
hurl -H 'User-Agent: some browser' 'https://codemadness.org/atom.xml'
HTTP Basic Auth (base64-encoded username:password):
hurl -H 'Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=' \
'https://codemadness.org/atom.xml'
GZIP (this assumes the served response Content-Type is gzip):
hurl -H 'Accept-Encoding: gzip' 'https://somesite/' | gzip -d
Convert JSON to TSV or separated output.
json2tsv reads JSON data from stdin. By default it outputs each JSON node as one line in a TAB-Separated Value format.
The output format per line is:
nodename<TAB>type<TAB>value<LF>
Control characters such as newline, TAB and backslash (\n, \t and \\) are escaped in the nodename and value fields. Other control characters are removed.
The type field is a single byte and can be:
Filtering on the first field "nodename" is easy using awk for example.
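For example (the json2tsv output below is faked with printf so the filter runs standalone; the type letters shown are illustrative):

```shell
# select the value of the .title node from (simulated) json2tsv output:
printf '.title\ts\thello world\n.count\tn\t3\n' |
awk -F '\t' '$1 == ".title" { print $3 }'
# prints: hello world
```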
I wanted a tool that makes parsing JSON easier and work well from the shell, similar to jq.
sed and grep often work well enough for matching some value using a regex pattern, but they are not good enough to parse JSON correctly or to extract all information: just like parsing HTML/XML with a regex is not good (enough) or a good idea :P.
I didn't want to learn a new specific meta-language which jq has and wanted something simpler.
While it is more efficient to embed such a query language for data aggregation, it is also less simple. In my opinion it is simpler to separate this and use pattern processing with awk or another filtering/aggregating program.
For the parser, there are many JSON parsers out there, like the efficient jsmn parser, however a few parser behaviours I want to have are:
This is why I went for a parser design that uses a single callback per "node" type and keeps track of the current nested structure in a single array and emits that.
git clone git://git.codemadness.org/json2tsv
You can browse the source-code at:
Releases are available at:
$ make
# make install
A usage example to parse posts of the JSON API of reddit.com and format them to a plain-text list using awk:
#!/bin/sh
curl -s -H 'User-Agent:' 'https://old.reddit.com/.json?raw_json=1&limit=100' | \
json2tsv | \
awk -F '\t' '
function show() {
	if (length(o["title"]) == 0)
		return;
	print n ". " o["title"] " by " o["author"] " in r/" o["subreddit"];
	print o["url"];
	print "";
}
$1 == ".data.children[].data" {
	show();
	n++;
	delete o;
}
$1 ~ /^\.data\.children\[\]\.data\.[a-zA-Z0-9_]*$/ {
	o[substr($1, 23)] = $3;
}
END {
	show();
}'
This guide describes how to setup a local mirror and installation/upgrade server that requires little or no input interaction.
The HTTP mirror will be used to fetch the base sets and (optional) custom sets. In this guide we will assume 192.168.0.2 is the local installation server and mirror, the CPU architecture is amd64 and the OpenBSD release version is 6.5. We will store the files in the directory with the structure:
http://192.168.0.2/pub/OpenBSD/6.5/amd64/
Create the www serve directory and fetch all sets and install files (to save space, *.iso and install65.fs can be skipped):
$ cd /var/www/htdocs
$ mkdir -p pub/OpenBSD/6.5/amd64/
$ cd pub/OpenBSD/6.5/amd64/
$ ftp 'ftp://ftp.nluug.nl/pub/OpenBSD/6.5/amd64/*'
Verify signature and check some checksums:
$ signify -C -p /etc/signify/openbsd-65-base.pub -x SHA256.sig
Setup httpd(8) for simple file serving:
# $FAVORITE_EDITOR /etc/httpd.conf
A minimal example config for httpd.conf(5):
server "*" {
	listen on * port 80
}
The default www root directory is: /var/www/htdocs/
Enable the httpd daemon to start by default and start it now:
# rcctl enable httpd
# rcctl start httpd
The installer supports loading responses to the installation/upgrade questions from a simple text file. We can do a regular installation and copy the answers from the saved file to make an automated version of it.
Do a test installation, at the end of the installation or upgrade when asked the question:
Exit to (S)hell, (H)alt or (R)eboot?
Type S to go to the shell. Find the response file for an installation and copy it to some USB stick or write down the response answers:
cp /tmp/i/install.resp /mnt/usbstick/
A response file could be for example:
System hostname = testvm
Which network interface do you wish to configure = em0
IPv4 address for em0 = dhcp
IPv6 address for em0 = none
Which network interface do you wish to configure = done
Password for root account = $2b$10$IqI43aXjgD55Q3nLbRakRO/UAG6SAClL9pyk0vIUpHZSAcLx8fWk.
Password for user testuser = $2b$10$IqI43aXjgD55Q3nLbRakRO/UAG6SAClL9pyk0vIUpHZSAcLx8fWk.
Start sshd(8) by default = no
Do you expect to run the X Window System = no
Setup a user = testuser
Full name for user testuser = testuser
What timezone are you in = Europe/Amsterdam
Which disk is the root disk = wd0
Use (W)hole disk MBR, whole disk (G)PT, (O)penBSD area or (E)dit = OpenBSD
Use (A)uto layout, (E)dit auto layout, or create (C)ustom layout = a
Location of sets = http
HTTP proxy URL = none
HTTP Server = 192.168.0.2
Server directory = pub/OpenBSD/6.5/amd64
Unable to connect using https. Use http instead = yes
Location of sets = http
Set name(s) = done
Location of sets = done
Exit to (S)hell, (H)alt or (R)eboot = R
Get custom encrypted password for response file:
$ printf '%s' 'yourpassword' | encrypt
rdsetroot(8) is part of the base system since 6.5. Before 6.5 it was available in the /usr/src/ tree as elfrdsetroot; see also the rd(4) man page.
$ mkdir auto
$ cd auto
$ cp pubdir/bsd.rd .
$ rdsetroot -x bsd.rd disk.fs
# vnconfig vnd0 disk.fs
# mkdir mount
# mount /dev/vnd0a mount
Copy the response file (install.resp) to: mount/auto_install.conf (installation) or mount/auto_upgrade.conf (upgrade), but not both. In this guide we will do an auto-installation.
Unmount, detach and patch RAMDISK:
# umount mount
# vnconfig -u vnd0
$ rdsetroot bsd.rd disk.fs
To test copy bsd.rd to the root of some testmachine like /bsd.test.rd then (re)boot and type:
boot /bsd.test.rd
In the future (6.5+) it will be possible to copy a file named "/bsd.upgrade" to the root of a current system and have the kernel loaded automatically: see the script bsd.upgrade in CVS. Of course this is also possible with PXE boot or some custom USB/ISO. As explained in the autoinstall(8) man page: create either an auto_upgrade.conf or an auto_install.conf, but not both.
In this example the miniroot will boot the custom kernel, but fetch all the sets from the local network.
We will base our miniroot on the official version: miniroot65.fs.
We will create a 16MB miniroot to boot from (in this guide it is assumed the original miniroot is about 4MB and that the modified kernel image fits in the newly allocated space):
$ dd if=/dev/zero of=new.fs bs=512 count=32768
Copy first part of the original image to the new disk (no truncation):
$ dd conv=notrunc if=miniroot65.fs of=new.fs
# vnconfig vnd0 new.fs
Expand disk OpenBSD boundaries:
# disklabel -E vnd0
> b
Starting sector: [1024]
Size ('*' for entire disk): [8576] *
> r
Total free sectors: 1168.
> c a
Partition a is currently 8576 sectors in size, and can have a maximum
size of 9744 sectors.
size: [8576] *
> w
> q
or:
# printf 'b\n\n*\nc a\n*\nw\n' | disklabel -E vnd0
Grow filesystem and check it and mark as clean:
# growfs -y /dev/vnd0a
# fsck -y /dev/vnd0a
Mount filesystem:
# mount /dev/vnd0a mount/
The kernel on the miniroot is GZIP compressed. Compress our modified bsd.rd and overwrite the original kernel:
# gzip -c9n bsd.rd > mount/bsd
Or save space (about 500KB) by stripping debug symbols, taken from the bsd.gz target in this Makefile:
$ cp bsd.rd bsd.strip
$ strip bsd.strip
$ strip -R .comment -R .SUNW_ctf bsd.strip
$ gzip -c9n bsd.strip > bsd.gz
$ cp bsd.gz mount/bsd
Now unmount and detach:
# umount mount/
# vnconfig -u vnd0
Now you can dd(1) the image new.fs to your bootable (USB) medium.
For patching /etc/rc.firsttime and other system files it is useful to use a customized installation set like siteVERSION.tgz, for example: site65.tgz. The sets can even be specified per host/MAC address like siteVERSION-$(hostname -s).tgz so for example: site65-testvm.tgz
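Loosely speaking, a site set is a gzipped tar archive of files that are extracted relative to / at installation time. A hypothetical example shipping a custom /etc/rc.firsttime (the hostname in the set name is illustrative):

```shell
# stage the files in a directory, then pack relative to it so the
# paths extract relative to / during installation:
mkdir -p site/etc
echo 'echo "hello from rc.firsttime"' > site/etc/rc.firsttime
(cd site && tar -czf ../site65-testvm.tgz .)
tar -tzf site65-testvm.tgz
```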
When the installer checks the base sets of the mirror it looks for a file index.txt. To add custom sets the site entries have to be added.
For example:
-rw-r--r-- 1 1001 0 4538975 Oct 11 13:58:26 2018 site65-testvm.tgz
The filesize, permissions etc do not matter and are not checked by the installer. Only the filename is matched by a regular expression.
If you have custom sets without creating a signed custom release, you will be prompted with the messages:
checksum test failed
and:
unverified sets: continue without verification
OpenBSD uses the program signify(1) to cryptographically sign and verify filesets.
To create a custom public/private keypair (of course make sure to store the private key privately):
$ signify -G -n -c "Custom 6.5 install" -p custom-65-base.pub -s custom-65-base.sec
Create new checksum file with filelist of the current directory (except SHA256* files):
$ printf '%s\n' * | grep -v SHA256 | xargs sha256 > SHA256
Sign SHA256 and store as SHA256.sig, embed signature:
$ signify -S -e -s /privatedir/custom-65-base.sec -m SHA256 -x SHA256.sig
Verify the created signature and data is correct:
$ signify -C -p /somelocation/custom-65-base.pub -x SHA256.sig
Copy only the public key to the RAMDISK:
$ cp custom-65-base.pub mount/etc/signify/custom-65-base.pub
Now we have to patch the install.sub file to check our public key. If you know a better way without having to patch this script, please let me know.
Change the variable PUB_KEY in the shellscript mount/install.sub from:
PUB_KEY=/etc/signify/openbsd-${VERSION}-base.pub
To:
PUB_KEY=/etc/signify/custom-${VERSION}-base.pub
And for upgrades from:
$UPGRADE_BSDRD &&
PUB_KEY=/mnt/etc/signify/openbsd-$((VERSION + 1))-base.pub
To:
$UPGRADE_BSDRD &&
PUB_KEY=/mnt/etc/signify/custom-$((VERSION + 1))-base.pub
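Both edits can also be done with one sed substitution over install.sub; in this sketch a sample line stands in for the real file:

```shell
# a sample line standing in for the real install.sub:
printf 'PUB_KEY=/etc/signify/openbsd-${VERSION}-base.pub\n' > install.sub
# rewrite every signify key path from openbsd-* to custom-*:
sed 's|/etc/signify/openbsd-|/etc/signify/custom-|' install.sub
```

The pattern also matches the /mnt/etc/signify/... upgrade path, since it only anchors on the common suffix of both lines.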
Idiotbox is a less resource-heavy Youtube interface. For viewing videos it is recommended to use it with mpv or mplayer with youtube-dl or yt-dlp.
For more (up-to-date) information see the README file.
In my opinion the standard Youtube web interface is:
git clone git://git.codemadness.org/frontends
You can browse the source-code at:
Releases are available at:
You can view it here: https://codemadness.org/idiotbox/
For example you can search using the query string parameter "q": https://codemadness.org/idiotbox/?q=gunther+tralala
The gopher version is here: gopher://codemadness.org/7/idiotbox.cgi
For fun I wrote a small HTTP Gopher proxy CGI program in C. It only supports the basic Gopher types and has some restrictions to prevent some abuse.
For your regular Gopher browsing I recommend the simple Gopher client sacc.
For more information about Gopher check out gopherproject.org.
git clone git://git.codemadness.org/gopherproxy-c
You can browse the source-code at:
You can view it here: https://codemadness.org/gopherproxy/
For example you can also view my gopherhole using the proxy, the query string parameter "q" reads the URI: https://codemadness.org/gopherproxy/?q=codemadness.org
Last modification on
Make sure to set up SSH public-key authentication so you don't need to enter a password each time and authentication is more secure.
For example in the file $HOME/.ssh/config:
Host codemadness
Hostname codemadness.org
Port 22
IdentityFile ~/.ssh/codemadness/id_rsa
Of course also make sure to generate the private and public keys.
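Generating a keypair can be done with ssh-keygen(1). A sketch matching the IdentityFile path in the config example (the -N "" gives an empty passphrase so it runs non-interactively; omit it to be prompted for one):

```shell
# create the key directory and generate an RSA keypair
# (path matches the IdentityFile in the SSH config example above)
mkdir -p ~/.ssh/codemadness
ssh-keygen -q -t rsa -b 4096 -N "" -f ~/.ssh/codemadness/id_rsa
```

Then append the contents of ~/.ssh/codemadness/id_rsa.pub to ~/.ssh/authorized_keys on the server.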
Make an alias or function in your shell config:
pastesrv() {
ssh user@codemadness "cat > /your/www/publicdir/paste/$1"
echo "https://codemadness.org/paste/$1"
}
This function reads data from stdin, transfers it securely via SSH and writes it to a file at the specified path. This path can be made visible via HTTP, gopher or another protocol. It then writes the absolute URL to stdout; this URL can be copied to the clipboard and pasted anywhere, such as in an e-mail or on IRC.
To use it, here are some examples:
Create a patch of the last commit in the git repo and store it:
git format-patch --stdout HEAD^ | pastesrv 'somepatch.diff'
Create a screenshot of your current desktop and paste it:
xscreenshot | ff2png | pastesrv 'screenshot.png'
There are many other uses of course, use your imagination :)
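As an illustrative variant (a sketch, not from the original article), the function could pick a name automatically when no argument is given, so piping to pastesrv without a name works too:

```shell
# hypothetical variant of pastesrv: derive a filename from the current
# date and PID when no argument is given (host and paths as in the
# example above)
pastesrv() {
	name="${1:-paste-$(date +%Y%m%d%H%M%S)-$$.txt}"
	ssh user@codemadness "cat > /your/www/publicdir/paste/$name"
	echo "https://codemadness.org/paste/$name"
}
```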
Last modification on
This article assumes you use OpenBSD for the service files and OS-specific examples.
A good reason to host your own git repositories is to keep control over your own computing infrastructure.
Some bad examples:
The same thing can happen with Github, Atlassian Bitbucket or other similar services. After all, they are just companies with commercial interests. These online services also have different pricing plans and various (arbitrary) restrictions. When you host it yourself the only restrictions are the resource limits of the system and your connection, which makes it a much more flexible solution.
Always make sure you own the software (which is Free or open-source) and you can host it yourself, so you will be in control of it.
For the hosting it is recommended to use a so-called "bare" repository. A bare repository means no files are checked out in the folder itself. To create a bare repository use git init with the --bare argument:
$ git init --bare
I recommend creating a separate user and group for the source-code repositories. In the examples we will assume the user is called "src".
Log in as the src user and create the files. To create a directory for the repos, in this example /home/src/src:
$ mkdir -p /home/src/src
$ cd /home/src/src
$ git init --bare someproject
$ $EDITOR someproject/description
Make sure the git-daemon process has access permissions to these repositories.
Using git-daemon you can clone the repositories publicly using the efficient git:// protocol. An alternative without having to use git-daemon is by using (anonymous) SSH, HTTPS or any public shared filesystem.
For a private-only repository I recommend just using SSH without git-daemon, because it is secure.
Install the git package. The package should contain "git daemon":
# pkg_add git
Enable the daemon:
# rcctl enable gitdaemon
Set the gitdaemon service flags to use the src directory and export all the available repositories in this directory. The command-line flag "--export-all" exports all repositories in the base path. Alternatively you can use the "git-daemon-export-ok" file (see the git-daemon man page).
# rcctl set gitdaemon flags --export-all --base-path="/home/src/src"
To configure the service to run as the user _gitdaemon:
# rcctl set gitdaemon user _gitdaemon
To run the daemon directly as the user _gitdaemon (without dropping privileges from root to the user) set the following flags in /etc/rc.d/gitdaemon:
daemon_flags="--user=_gitdaemon"
Which will also avoid this warning while cloning:
"can't access /root/.git/config"
Now start the daemon:
# rcctl start gitdaemon
To test and clone the repository do:
$ git clone git://yourdomain/someproject
If you skipped the optional git-daemon installation then just clone via SSH:
$ git clone ssh://youraccount@yourdomain:/home/src/src/someproject
When cloning via SSH make sure to setup private/public key authentication for security and convenience.
You should also make sure the firewall allows connections to the services like the git daemon, HTTPd or SSH, for example using OpenBSD pf something like this can be set in /etc/pf.conf:
tcp_services="{ ssh, gopher, http, https, git }"
pass in on egress proto tcp from any to (egress) port $tcp_services
Add the repository as a remote:
$ git remote add myremote ssh://youraccount@yourdomain:/home/src/src/someproject
Then push the changes:
$ git push myremote master:master
Sometimes it's nice to browse the git history log of the repository in a web browser or some other program without having to look at the local repository.
It's also possible with these tools to generate an Atom feed and then use an RSS/Atom reader to track the git history:
My sfeed program can be used as an RSS/Atom reader.
Using git hooks you can set up automated triggers, for example when pushing to a repository. Some useful examples can be:
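For example, a minimal post-receive hook that logs every push (a sketch; the /tmp paths are illustrative, in the setup above the hook would live in /home/src/src/someproject/hooks/post-receive, and the body could instead regenerate static pages):

```shell
# sketch: install a post-receive hook that logs each pushed ref
# (illustrative /tmp paths; use your bare repository's hooks/ directory)
mkdir -p /tmp/someproject/hooks
cat > /tmp/someproject/hooks/post-receive <<'EOF'
#!/bin/sh
# git feeds "<old-sha> <new-sha> <refname>" per updated ref on stdin
while read -r old new ref; do
	echo "$(date): $ref updated: $old -> $new" >> /tmp/someproject/push.log
done
EOF
chmod +x /tmp/someproject/hooks/post-receive
```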
Last modification on
This describes how to setup an OpenBSD SPARC64 VM in QEMU.
To create a 5GB disk image:
qemu-img create -f qcow2 fs.qcow2 5G
In this guide we'll use the installation ISO to install OpenBSD. Make sure to download the latest (stable) OpenBSD ISO, for example install62.iso.
Start the VM:
#!/bin/sh
LC_ALL=C QEMU_AUDIO_DRV=none \
qemu-system-sparc64 \
-machine sun4u,usb=off \
-realtime mlock=off \
-smp 1,sockets=1,cores=1,threads=1 \
-rtc base=utc \
-m 1024 \
-boot c \
-drive file=fs.qcow2,if=none,id=drive-ide0-0-1,format=qcow2,cache=none \
-cdrom install62.iso \
-device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-1,id=ide0-0-1 \
-msg timestamp=on \
-serial pty -nographic \
-net nic,model=ne2k_pci -net user
The VM has the following properties:
From your host connect to the serial device indicated by QEMU, for example:
(qemu) 2017-11-19T15:14:20.884312Z qemu-system-sparc64: -serial pty: char device redirected to /dev/ttyp0 (label serial0)
Then you can use the serial terminal emulator cu to attach:
cu -l /dev/ttyp0
Another option could be using the simple terminal(st) from suckless.
st -l /dev/ttyp0
To detach while using cu, the cu(1) man page says:
Typed characters are normally transmitted directly to the remote machine (which
does the echoing as well). A tilde ('~') appearing as the first character of a
line is an escape signal; the following are recognized:
~^D or ~. Drop the connection and exit. Only the connection is
dropped - the login session is not terminated.
On boot you have to type:
root device: wd0a
For swap, use the default (wd0b): press enter.
Automatic network configuration using DHCP
echo "dhcp" > /etc/hostname.ne0
To bring up the interface (will be automatic on the next boot):
sh /etc/netstart
Add a mirror to /etc/installurl for package installation. Make sure to lookup the most efficient/nearby mirror site on the OpenBSD mirror page.
echo "https://ftp.hostserver.de/pub/OpenBSD" > /etc/installurl
Last modification on
Tscrape is a Twitter web scraper and archiver.
Twitter removed the functionality to follow users using a RSS feed without authenticating or using their API. With this program you can format tweets in any way you like relatively anonymously.
For more (up-to-date) information see the README file.
git clone git://git.codemadness.org/tscrape
You can browse the source-code at:
Releases are available at:
Output format examples:
Last modification on
This is a small datatable Javascript with no dependencies.
It was created because all the other datatable scripts suck balls.
Most Javascripts nowadays have a default dependency on jQuery, Bootstrap or other frameworks.
jQuery adds about 97KB and Bootstrap adds about 100KB to your scripts and CSS as a dependency. This increases the CPU, memory and bandwidth consumption and latency. It also adds complexity to your scripts.
jQuery was mostly used for backwards-compatibility in the Internet Explorer days, but it is most often not needed anymore. It contains functionality to query the DOM using CSS-like selectors, but this is now supported natively with for example document.querySelectorAll. Functionality like a JSON parser is now available as standard: JSON.parse().
None of the sizes below are "minified" or gzipped.
Name | Total | JS | CSS | Images | jQuery
---------------------------------+---------+---------+-------+--------+-------
jsdatatable | 12.9KB | 9.1KB | 2.5KB | 1.3KB | -
datatables.net (without plugins) | 563.4KB | 449.3KB | 16KB | 0.8KB | 97.3KB
jdatatable | 154.6KB | 53KB | 1KB | 3.3KB | 97.3KB
Of course jsdatatable has less features (less is more!), but it does 90% of what's needed. Because it is so small it is also much simpler to understand and extend with required features if needed.
See also: The website obesity crisis
git clone git://git.codemadness.org/jscancer
You can browse the source-code at:
It is in the datatable directory.
Releases are available at:
See example.html for an example. A stylesheet file datatable.css is also included, it contains the icons as embedded images.
A table should have the classname "datatable" set; it must contain a <thead> for the column headers (<td> or <th>) and a <tbody> element for the data. The minimal code needed for a working datatable:
<html>
<body>
<input class="filter-text" /><!-- optional -->
<table class="datatable">
<thead><!-- columns -->
<tr><td>Click me</td></tr>
</thead>
<tbody><!-- data -->
<tr><td>a</td></tr>
<tr><td>b</td></tr>
</tbody>
</table>
<script type="text/javascript" src="datatable.js"></script>
<script type="text/javascript">var datatables = datatable_autoload();</script>
</body>
</html>
The following column attributes are supported:
By default only parsing for the types: date, float, int and string are supported, but other types can be easily added as a function with the name: datatable_parse_<typename>(). The parse functions parse the data-value attribute when set or else the cell content (in order). Because of this behaviour you can set the actual values as the data-value attribute and use the cell content for display. This is useful to display and properly sort locale-aware currency, datetimes etc.
Filtering will be done case-insensitively on the cell content and when set also on the data-value attribute. The filter string is split up as tokens separated by space. Each token must match at least once per row to display it.
Sorting is done on the parsed values by default with the function: datatable_sort_default(). To change this you can set a customname string on the data-sort attribute on the column which translates to the function: datatable_sort_<customname>().
In some applications locale values are used, like currency, decimal numbers and datetimes. Some people also like to use icons or extended HTML elements inside the cell. Because jsdatatable sorts on the parsed value (see section PARSING) it is possible to sort on the data-value attribute values and use the cell content for display.
For example:
To update data dynamically see example-ajax.html for an example how to do this.
For the below example to work you need to have Javascript enabled.
Last modification on
stagit-gopher is a static page generator for Gopher. It creates the pages as static geomyidae .gph files. stagit-gopher is a modified version of the HTML version of stagit.
Read the README for more information about it.
I also run a gopherhole and stagit-gopher, you can see how it looks here: gopher://codemadness.org/1/git/
sacc is a good Gopher client to view it.
This is by design, just use git locally.
git clone git://git.codemadness.org/stagit-gopher
You can browse the source-code at:
Releases are available at:
Last modification on
Saait is the most boring static HTML page generator.
Meaning of saai (dutch): boring. Pronunciation: site
Read the README for more information about it.
I used to use shellscripts to generate the static pages, but realised I wanted a small program that works on each platform consistently. There are many incompatibilities or unimplemented features in base tools across different platforms: Linux, UNIX, Windows.
This site is created using saait.
git clone git://git.codemadness.org/saait
You can browse the source-code at:
Releases are available at:
Below is the saait(1) man page, which includes usage examples.
SAAIT(1) General Commands Manual SAAIT(1)
NAME
saait - the most boring static page generator
SYNOPSIS
saait [-c configfile] [-o outputdir] [-t templatesdir] pages...
DESCRIPTION
saait writes HTML pages to the output directory.
The arguments pages are page config files, which are processed in the
given order.
The options are as follows:
-c configfile
The global configuration file, the default is "config.cfg". Each
page configuration file inherits variables from this file. These
variables can be overwritten per page.
-o outputdir
The output directory, the default is "output".
-t templatesdir
The templates directory, the default is "templates".
DIRECTORY AND FILE STRUCTURE
A recommended directory structure for pages, although the names can be
anything:
pages/001-page.cfg
pages/001-page.html
pages/002-page.cfg
pages/002-page.html
The directory and file structure for templates must be:
templates/<templatename>/header.ext
templates/<templatename>/item.ext
templates/<templatename>/footer.ext
The following filename prefixes are detected for template blocks and
processed in this order:
"header."
Header block.
"item."
Item block.
"footer."
Footer block.
The files are saved as output/<templatename>, for example
templates/atom.xml/* will become: output/atom.xml. If a template block
file does not exist then it is treated as if it was empty.
Template directories starting with a dot (".") are ignored.
The "page" templatename is special and will be used per page.
CONFIG FILE
A config file has a simple key=value configuration syntax, for example:
# this is a comment line.
filename = example.html
title = Example page
description = This is an example page
created = 2009-04-12
updated = 2009-04-14
The following variable names are special with their respective defaults:
contentfile
Path to the input content filename, by default this is the path
of the config file with the last extension replaced to ".html".
filename
The filename or relative file path for the output file for this
page. By default the value is the basename of the contentfile.
The path of the written output file is the value of filename
appended to the outputdir path.
A line starting with # is a comment and is ignored.
TABs and spaces before and after a variable name are ignored. TABs and
spaces before a value are ignored.
TEMPLATES
A template (block) is text. Variables are replaced with the values set
in the config files.
The possible operators for variables are:
$ Escapes an XML string, for example: < to the entity &lt;.
# Literal raw string value.
% Insert contents of file of the value of the variable.
For example in a HTML item template:
<article>
<header>
<h1><a href="">${title}</a></h1>
<p>
<strong>Last modification on </strong>
<time datetime="${updated}">${updated}</time>
</p>
</header>
%{contentfile}
</article>
EXIT STATUS
The saait utility exits 0 on success, and >0 if an error occurs.
EXAMPLES
A basic usage example:
1. Create a directory for a new site:
mkdir newsite
2. Copy the example pages, templates, global config file and example
stylesheets to a directory:
cp -r pages templates config.cfg style.css print.css newsite/
3. Change the current directory to the created directory.
cd newsite/
4. Change the values in the global config.cfg file.
5. If you want to modify parts of the header, like the navigation menu
items, you can change the following two template files:
templates/page/header.html
templates/index.html/header.html
6. Create any new pages in the pages directory. For each config file
there has to be a corresponding HTML file. By default this HTML
file has the path of the config file, but with the last extension
(".cfg" in this case) replaced to ".html".
7. Create an output directory:
mkdir -p output
8. After any modifications the following commands can be used to
generate the output and process the pages in descending order:
find pages -type f -name '*.cfg' -print0 | sort -zr | xargs -0 saait
9. Copy the modified stylesheets to the output directory also:
cp style.css print.css output/
10. Open output/index.html locally in your webbrowser to review the
changes.
11. To synchronize files, you can securely transfer them via SSH using
rsync:
rsync -av output/ user@somehost:/var/www/htdocs/
TRIVIA
The most boring static page generator.
Meaning of saai (dutch): boring, pronunciation of saait: site
SEE ALSO
find(1), sort(1), xargs(1)
AUTHORS
Hiltjo Posthuma <hiltjo@codemadness.org>
Last modification on
stagit is a static page generator for git.
Read the README for more information about it.
My git repository uses stagit, you can see how it looks here: https://codemadness.org/git/
In these cases it is better to use cgit or possibly change stagit to run as a CGI program.
This is by design, just use git locally.
git clone git://git.codemadness.org/stagit
You can browse the source-code at:
Releases are available at:
Last modification on
This is a guide to get cgit working with OpenBSD httpd(8) and slowcgi(8) in base. OpenBSD httpd is very simple to setup, but nevertheless this guide might help someone out there.
Install the cgit package:
# pkg_add cgit
or build it from ports:
# cd /usr/ports/www/cgit && make && make install
An example of httpd.conf(5): httpd.conf.
By default the slowcgi UNIX domain socket is located at: /var/www/run/slowcgi.sock. For this example we use the defaults.
The cgit binary should be located at: /var/www/cgi-bin/cgit.cgi (default).
cgit uses the $CGIT_CONFIG environment variable to locate its config. By default on OpenBSD this is set to /conf/cgitrc (chroot), which is /var/www/conf/cgitrc. An example of the cgitrc file is here: cgitrc.
In this example the cgit cache directory is set to /cgit/cache (chroot), which is /var/www/cgit/cache. Make sure to give this path read and write permissions for cgit (www:daemon).
In the example the repository path (scan-path) is set to /htdocs/src (chroot), which is /var/www/htdocs/src.
The footer file is set to /conf/cgit.footer. Make sure this file exists or you will get warnings:
# >/var/www/conf/cgit.footer
Make sure cgit.css (stylesheet) and cgit.png (logo) are accessible, by default: /var/www/cgit/cgit.{css,png} (location can be changed in httpd.conf).
To support .tar.gz snapshots a static gzip binary is required in the chroot /bin directory:
cd /usr/src/usr.bin/compress
make clean && make LDFLAGS="-static -pie"
cp obj/compress /var/www/bin/gzip
Enable the httpd and slowcgi services to automatically start them at boot:
# rcctl enable httpd slowcgi
Start the services:
# rcctl start httpd slowcgi
Last modification on
Update: as of 2020-05-06: I stopped maintaining it. Twitch now requires OAUTH and 2-factor authentication. It requires me to expose personal information such as my phone number.
Update: as of ~2020-01-03: I rewrote this application from Golang to C. The Twitch Kraken API used by the Golang version was deprecated. It was rewritten to use the Helix API.
This program allows you to view streams in your own video player, so the bloated Twitch interface is not needed. It is written in C.
git clone git://git.codemadness.org/frontends
You can browse the source-code at:
Last modification on
This is a userscript I wrote a while ago which allows focusing the first input field on a page with ctrl+space. This is useful if a site doesn't specify the autofocus attribute for an input field and you don't want to switch to it using the mouse.
Last modification on
This is a userscript I wrote a while ago which circumvents having to log in with an account on Youtube if a video requires age verification.
Note: this is an old script and does not work anymore.
Last modification on
This is a userscript I wrote a while ago which white-lists fonts I like and blocks the rest. The reason I made this is that I don't like the inconsistency of the custom fonts used on a lot of websites.
Download userscript Block_stupid_fonts_v1.2.user.js
Old version: Download userscript Block_stupid_fonts.user.js
Last modification on
Sfeed is an RSS and Atom parser (and some format programs).
It converts RSS or Atom feeds from XML to a TAB-separated file. There are formatting programs included to convert this TAB-separated format to various other formats. There are also some programs and scripts included to import and export OPML and to fetch, filter, merge and order feed items.
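To give an idea of the TSV format (a sketch with made-up sample data, not real feed output; per sfeed(5) the first three fields are the UNIX timestamp, title and link), the output is easy to post-process with standard tools:

```shell
# print "title: link" from sfeed's TAB-separated output
# (a sample line with made-up data stands in for a real feed)
printf '1672531200\tSome post\thttps://example.org/post\n' |
	awk -F '\t' '{ printf "%s: %s\n", $2, $3 }'
# prints: Some post: https://example.org/post
```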
For the most (up-to-date) information see the README.
git clone git://git.codemadness.org/sfeed
You can browse the source-code at:
Releases are available at:
$ make
# make install
The above screenshot uses the sfeed_plain format program with dmenu. This program outputs the feed items in a compact way per line as plain-text to stdout. The dmenu program reads these lines from stdin and displays them as an X11 list menu. When an item is selected in dmenu it prints this item to stdout. A simple script can then filter for the URL in this output and perform some action, like opening it in a browser or opening a podcast in your music player.
For example:
#!/bin/sh
url=$(sfeed_plain "$HOME/.sfeed/feeds/"* | dmenu -l 35 -i | \
sed -n 's@^.* \([a-zA-Z]*://\)\(.*\)$@\1\2@p')
test -n "${url}" && $BROWSER "${url}"
However this is just one way to format and interact with feed items. See also the README for other practical examples.
Below are some examples of output that are supported by the included format programs:
There is also a curses UI front-end, see the page sfeed_curses. It is now part of sfeed.
Here are some videos of other people showcasing some of the functionalities of sfeed, sfeed_plain and sfeed_curses. To the creators: thanks for making these!
Last modification on
This is a dark theme I made for vim. I have personally used this theme for quite a while now and tweaked it to my liking over time. It is made for gvim, but it also works for 16-colour terminals (with small visual differences). The relaxed.vim file also has my .Xdefaults file colours listed at the top for 16+-colour terminals on X11.
It is inspired by the "desert" theme available at https://www.vim.org/scripts/script.php?script_id=105, although I removed the cursive and bold styles and changed some colours I didn't like.
Last modification on
Seturgent is a small utility to set an application's urgency hint. For most window managers and panel applications this will highlight the application and allow special actions.
git clone git://git.codemadness.org/seturgent
You can browse the source-code at:
Releases are available at:
Last modification on
DWM is a very minimal window manager. It has the most essential features I need; everything else is "do-it-yourself" or can be added with the many available patches. The vanilla version is less than 2000 SLOC. This makes it easy to understand and modify.
I really like my configuration at the moment and want to share my changes. Some of the features listed below are patches from suckless.org I applied, but there are also some changes I made.
This configuration is entirely tailored for my preferences of course.
git clone -b hiltjo git://git.codemadness.org/dwm
Last modification on
Today I was doing some web development and wanted to see all the rules in a stylesheet (CSS) that were not used for the current document. I wrote the following Javascript code which you can paste in the Firebug console and run:
(function() {
for (var i=0;i<document.styleSheets.length;i++) {
var rules = document.styleSheets[i].cssRules || [];
var sheethref = document.styleSheets[i].href || 'inline';
for (var r=0;r<rules.length;r++)
if (!document.querySelectorAll(rules[r].selectorText).length)
console.log(sheethref + ': "' + rules[r].selectorText + '" not found.');
}
})();
This will output all the (currently) unused CSS rules per selector, the output can be for example:
http://www.codemadness.nl/blog/wp-content/themes/codemadness/style.css: "fieldset, a img" not found.
http://www.codemadness.nl/blog/wp-content/themes/codemadness/style.css: "#headerimg" not found.
http://www.codemadness.nl/blog/wp-content/themes/codemadness/style.css: "a:hover" not found.
http://www.codemadness.nl/blog/wp-content/themes/codemadness/style.css: "h2 a:hover, h3 a:hover" not found.
http://www.codemadness.nl/blog/wp-content/themes/codemadness/style.css: ".postmetadata-center" not found.
http://www.codemadness.nl/blog/wp-content/themes/codemadness/style.css: ".thread-alt" not found.
Just a trick I wanted to share, I hope someone finds this useful :)
For webkit-based browsers you can use the "Developer Tools" and run "Audits"; under "Web Page Performance" it lists "Remove unused CSS rules". For Firefox there is also Google Page Speed: https://code.google.com/speed/page-speed/ which adds an extra section to Firebug.
Tested on Chrome and Firefox.
Last modification on
Update: the DXTC patent expired on 2018-03-16, many distros enable this by default now.
S3TC (also known as DXTn or DXTC) is a patented lossy texture compression algorithm. See: https://en.wikipedia.org/wiki/S3TC for more detailed information. Many games use S3TC and if you use Wine to play games you definitely want to enable it if your graphics card supports it.
Because this algorithm was patented it is disabled by default on many Linux distributions.
To enable it you can install the library "libtxc_dxtn" if your favorite OS has not installed it already.
For easy configuration you can install the optional utility DRIconf, which you can find at: https://dri.freedesktop.org/wiki/DriConf. DriConf can safely be removed after configuration.
Install libtxc_dxtn:
ArchLinux:
# pacman -S libtxc_dxtn
Debian:
# aptitude install libtxc-dxtn-s2tc0
Install driconf (optional):
ArchLinux:
# pacman -S driconf
Debian:
# aptitude install driconf
Run driconf and enable S3TC:
Last modification on
NOTE: this guide is obsolete, a working driver is now included in the Linux kernel tree (since Linux 2.6.31)
A USB to powerline bridge is a network device that, instead of using an ordinary Ethernet cable (CAT5 for example) or wireless LAN, uses the power lines as a network to communicate with similar devices. A more comprehensive explanation of what it is and how it works can be found here: https://en.wikipedia.org/wiki/IEEE_1901.
Known products that use the Intellon 51x1 chipset:
To check if your device is supported:
$ lsusb | grep -i 09e1
Bus 001 Device 003: ID 09e1:5121 Intellon Corp.
If the vendor (09e1) and product (5121) ID match then it's probably supported.
Get drivers from the official site: http://www.devolo.co.uk/consumer/downloads-44-microlink-dlan-usb.html?l=en or mirrored here. The drivers from the official site were/are more up-to-date.
Extract them:
$ tar -xzvf dLAN-linux-package-v4.tar.gz
Go to the extracted directory and compile them:
$ ./configure
$ make
Depending on the errors you got you might need to download and apply my patch:
$ cd dLAN-linux-package-v4/ (or other path to the source code)
$ patch < int51x1.patch
Try again:
$ ./configure
$ make
If that failed try:
$ ./configure
$ KBUILD_NOPEDANTIC=1 make
If that went OK install the drivers (as root):
# make install
Check if the "devolo_usb" module is loaded:
$ lsmod | grep -i devolo_usb
If it shows up then it's loaded. Now check if the interface is added:
$ ifconfig -a | grep -i dlanusb
dlanusb0 Link encap:Ethernet HWaddr 00:12:34:56:78:9A
It is assumed you use a static IP; otherwise you can just use your DHCP client to get an unused IP address from your DHCP server. Setting up the interface is done like this (change the IP address and netmask accordingly if it's different):
# ifconfig dlanusb0 192.168.2.12 netmask 255.255.255.0
Try to ping an IP address on your network to test for a working connection:
$ ping 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
64 bytes from 192.168.2.1: icmp_seq=1 ttl=30 time=2.49 ms
64 bytes from 192.168.2.1: icmp_seq=2 ttl=30 time=3.37 ms
64 bytes from 192.168.2.1: icmp_seq=3 ttl=30 time=2.80 ms
--- 192.168.2.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2005ms
rtt min/avg/max/mdev = 2.497/2.891/3.374/0.368 ms
You can now set up a network connection like you normally do with any Ethernet device. The route can be added like this for example:
# route add -net 0.0.0.0 netmask 0.0.0.0 gw 192.168.2.1 dlanusb0
Change the IP address of your local gateway accordingly. Also make sure your nameserver is set in /etc/resolv.conf, something like:
nameserver 192.168.2.1
Test your internet connection by doing for example:
$ ping codemadness.org
PING codemadness.org (64.13.232.151) 56(84) bytes of data.
64 bytes from acmkoieeei.gs02.gridserver.com (64.13.232.151): icmp_seq=1 ttl=52 time=156 ms
64 bytes from acmkoieeei.gs02.gridserver.com (64.13.232.151): icmp_seq=2 ttl=52 time=156 ms
64 bytes from acmkoieeei.gs02.gridserver.com (64.13.232.151): icmp_seq=3 ttl=52 time=155 ms
--- codemadness.org ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 155.986/156.312/156.731/0.552 ms
If this command failed you probably have not setup your DNS/gateway properly. If it worked then good for you :)
Last modification on
Disclaimer: Some (including myself) may find some of these hints/exploits cheating. This guide is just for educational and fun purposes. Some of these hints/tips apply to Gothic 2 as well. I got the meat exploit from a guide somewhere on the internet; I can't recall where, but kudos to that person. Some of the exploits I discovered myself.
Gothic supports widescreen resolutions with a small tweak, add the following text string as a command-line argument:
-zRes:1920,1200,32
This also works for Gothic 2. Here 1920 is the width, 1200 the height and 32 the bits per pixel; change this to your preferred resolution.
Disable steam overlay. If that doesn't work rename GameOverlayRenderer.dll in your steam folder to _GameOverlayRenderer.dll. I strongly recommend buying the better version from GOG.com. The GOG version has no DRM and allows easier modding; it also allows playing in most published languages: German, English and Polish. Furthermore it has some original artwork and soundtrack included.
You can install the Gothic playerkit and patches to remove the Steam DRM.
WorldOfGothic playerkit patches:
If you're like me and have played the English version many times, but would like to hear the (original) German voice audio or if you would like to play with different audio than you're used to, then you can copy the speech.vdf file of your preferred version to your game files. Optionally turn on subtitles. I've used this to play the English version of Gothic with the original German voice audio and English subtitles. This works best with the version from GOG as it allows easier modding.
At night attack Huno the smith in the Old Camp and steal all his steel. Then make some weapons and sell them with a merchant. When you ask Huno about blacksmith equipment it will respawn with 5 of each kind of steel. This is also a fairly good starting weapon (requires 20 strength). Also his chest located near the sharpening stone and fire contains some steel as well, lock-pick it. The combination is: RRLRLL. The chest contains at least 20 raw steel, forge it to get 20 crude swords which you can sell for 50 ore each to a merchant. This will generate some nice starting money (1000+ ore) :)
This tip is useful for getting pretty good starting weapons.
Before entering the castle itself, drop your ore (Left Control + Down for me) in front of it. This ensures that when you get caught (and you probably will ;)) no ore gets stolen by the guards. Now use the "slip past guard" technique described below and you should be able to get into Gomez's castle. Run to the left, where some weapons are stored. Make sure you at least steal the best weapon (battle sword) and grab as much as you can until you get whacked. I usually stand in the corner, since that's where the best weapons are (battle sword, judgement sword, etc.). You'll now have some nice starting weapon(s), and the good thing is they require very low attributes (about 13 strength).
In the New Camp go to the mine and talk to Swiney at the bottom of "The Hollow". Ask who he is and then ask to join the scrapers. He will give you a "Digger's dress" worth 250 ore, with the following stats: +10 against weapons, +5 against fire. It also gives you free entrance to the bar in the New Camp.
In the quest from Lefty you are assigned to get water bottles from the Rice Lord. He will give you infinite amounts of water bottles, in batches of 12.
In the Old Camp in the main castle there are at least 3 chests with valuable items that don't require a key:
In the swamp-weed harvest quest you must get swamp-weed for a guru. After completing this quest you can collect the harvest again, and this time keep it without consequences.
This exploit is really simple: just draw your weapon before you're "targeted" by the guard and run past them; this bypasses the dialog sequence. When you're just out of their range, holster your weapon again so the people around won't get pissed off.
This works really well on the guards in front of the Old Camp's castle, Y'Berion's templars and the New Camp mercenaries near the water magicians, just to name a few.
Go to a frying pan and focus/target it so it says "frying pan" or similar. Now open your inventory and select the meat, then cook it (for me Left Control + Arrow Up). The inventory should remain open. You'll now have twice as much meat as you had before. Do this a few times and you'll have a lot of meat, which is handy for trading for ore and other items as well. This exploit does not work with the community patch applied.
When you fall or jump from a height that would normally cause fall damage, you can do the following trick: just before hitting the ground, strafe left or right. This works because it resets the falling animation. There are other ways to cancel the falling animation as well, such as attacking with a weapon in mid-air.
You get an additional 750 exp (from Lares) when you forge the letter in the New Camp and then give it to Diego. You can still join both camps after this.
An easy way to get more experience is to let skeleton mages summon as many skeletons as they can, instead of rushing to kill the summoner immediately. After you have defeated all of them, kill the skeleton mage.
If you want maximum power at the end of the game, you should save up the items that give you a permanent boost. Teachers of strength, dexterity and mana won't train a skill above 100. However, using potions and quest rewards you can increase it beyond 100.
You should also look out for the following:
Learn to get extra force into your punch from Horatio (strength +5; this can't be done once your strength is at 100). Talking to Jeremiah in the New Camp bar unlocks the dialog option to train strength with Horatio.
Smoke the strongest non-quest joint (+2 mana).
This one is really obvious, but I would like to point out that the mummies on each side of where Xardas is located have lots, and I mean lots, of permanent potions. This gives you a nice boost before the end battle.
Always pick the permanent potion as a reward for quests when you can, for example the quest for delivering the message to the High Fire magicians (mana potion) or the one for fetching the almanac for the Sect Camp. Don't forget to pick up the potions from Riordian the water magician when you're doing the focus stones quest: he carries a strength and a dexterity potion (+3).
If you want to talk to an NPC but one of their animations takes too long (like eating, drinking or smoking), you can sometimes force them out of it by quickly unsheathing and sheathing your weapon.
When in the Old Camp: Baal Parvez can take you to the Sect Camp, he can be found near the campfire near Fisk and Dexter. Mordrag can take you to the New Camp, he can be found near the south gate, slightly after the campfire near Baal Parvez.
When you follow them, you also get the experience for any monsters they kill.
The NPC Wolf in the New Camp sells "The Bloodflies" book for 150 ore. Reading it teaches you how to remove bloodfly parts (without having to spend learning points). After you have read the book and learned its skill, you can sell the book back for 75 ore. This investment quickly pays for itself: each bloodfly yields a sting (25 ore unsold value) and 2 wings (15 ore each, unsold value).
The templar Gor Na Drak (usually found near the old mine, walking around with another templar): talking to him teaches you how to extract secretion from minecrawlers for free.
The spell scroll "Transform into bloodfly" is very useful:
Almost all lootable mummies in the game (in the Orc temple and The Sleeper's temple) carry really good loot: permanent and regular potions, amulets and rings.
Using the tips described above, Gothic should be a really easy game and you should be able to reach a high(er) level with lots of mana/strength/hp.
Have fun!