Codemadness: a blog with various projects and articles about computer-related things
https://www.codemadness.org

Chess puzzle book generator
https://www.codemadness.org/chess-puzzles.html (2024-02-02, Hiltjo)


This was a Christmas hack for fun and non-profit: I wanted to write a chess puzzle book generator. It was inspired by 1001 Deadly Checkmates by John Nunn (ISBN-13: 978-1906454258), the Steps Method workbooks and other puzzle books.

Example output

Terminal version:

curl -s 'https://codemadness.org/downloads/puzzles/index.vt' | less -R

I may or may not periodically update this page :)

Time flies (since Christmas), here is a valentine edition with attraction puzzles (not only checkmates) using the red "love" theme. It is optimized for his and her pleasure:

https://codemadness.org/downloads/puzzles-valentine/

Clone

git clone git://git.codemadness.org/chess-puzzles

Browse

You can browse the source-code at:

Quick overview of how it works

The generate.sh shellscript generates the output and files for the puzzles.

The puzzles used are from the lichess.org puzzle database: https://database.lichess.org/#puzzles

This database is a big CSV file containing the initial board state in the Forsyth-Edwards Notation (FEN) format and the moves in Universal Chess Interface (UCI) format. Each line contains the board state and the initial and solution moves.
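As a sketch of that layout, the FEN and the move list can be picked out of a line with standard tools. The line below is hypothetical and simplified (real database lines contain more columns, such as rating, popularity and themes):

```shell
# A hypothetical, simplified puzzle line: id, FEN, moves.
line='ABC12,rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1,e2e4 e7e5'
# FEN contains spaces but never commas, so cut on the comma separator:
fen=$(printf '%s\n' "$line" | cut -d ',' -f 2)
moves=$(printf '%s\n' "$line" | cut -d ',' -f 3)
printf 'FEN:   %s\nMoves: %s\n' "$fen" "$moves"
```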

The generated index page is an HTML page listing the puzzles. Each puzzle on this page is an SVG image; this scalable image format looks good at all resolutions.

Open puzzle data

Lichess is an open-source and gratis website to play on-line chess. There are no paid tiers to unlock features: all the software hosting Lichess is open-source and anyone can register and play chess on it for free. Most of the data about the games played is also open.

However, the website depends on your donations or contributions. If you can, please do so.

generate.sh

It reads puzzles from the database and shuffles them, does some rough sorting and categorization based on difficulty, and assigns score points.

The random shuffling is done using a hard-coded random seed. This means on the same machine with the same puzzle database it will regenerate the same sequence of random puzzles in a deterministic manner.
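Such a seeded shuffle can be sketched with awk and sort: seed awk's pseudo-random generator with a fixed value, prefix each line with a random key, sort on the key and strip it again. Because the rand() sequence can differ between awk implementations, the order is only reproducible on the same machine, which matches the behaviour described above (this is an illustration, not necessarily how generate.sh does it):

```shell
# Deterministic shuffle with a hard-coded seed (42 here is arbitrary).
printf '%s\n' puzzle1 puzzle2 puzzle3 puzzle4 |
awk 'BEGIN { srand(42) } { printf "%.8f\t%s\n", rand(), $0 }' |
sort -n | cut -f 2-
```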

It outputs HTML with support for CSS dark mode and does not require Javascript. It includes a plain-text listing of the solutions in PGN notation for the puzzles. It also outputs .vt files suitable for the terminal, which use Unicode symbols for the chess pieces and RGB color sequences for the board theme.
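For example, a board square can be drawn in the terminal by combining a 24-bit RGB background escape sequence with a Unicode chess glyph. The colors here are arbitrary, and not every terminal supports 24-bit color:

```shell
# ESC[48;2;R;G;Bm sets an RGB background color, ESC[0m resets attributes.
# \342\231\236 is the UTF-8 encoding of the black knight glyph (U+265E).
printf '\033[48;2;181;136;99m \342\231\236 \033[0m\n'
```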

fen.c

This is a program written in C to read and parse the board state in FEN format and read the UCI moves. It can output to various formats.

See the man page for detailed usage information.

fen.c supports the following output formats:

  • ascii - very simple ASCII mode.
  • fen - output FEN of the board state (from FEN and optional played moves).
  • pgn - Portable Game Notation.
  • speak - mode to output a description of the moves in words.
  • SVG - Scalable Vector Graphics image.
  • tty - Terminal output with some markup using escape codes.

fen.c can also run in CGI mode. This can be used on an HTTP server:

Position from game: Rene Letelier Martner - Robert James Fischer, 1960-10-24

Terminal output:

curl -s 'https://codemadness.org/onlyfens?moves=e2e4%20e7e5&output=tty'

Support for Dutch notated PGN and output

For the pgn and speak modes there is an option to output Dutch-notated PGN or speech too.

For example:

  • Queen = Dame (Q -> D), translated: lady.
  • Rook = Toren (R -> T), translated: tower.
  • Bishop = Loper (B -> L), translated: walker.
  • Knight = Paard (N -> P), translated: horse.
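Since these Dutch piece letters map one-to-one onto the English SAN letters (the king, "Koning", keeps its K, and pawns have no letter), a rough sketch of the translation is a single tr call. This is an illustration only, not necessarily how fen.c implements it:

```shell
# Q->D, R->T, B->L, N->P; lowercase square names are left untouched.
printf '%s\n' '1. Nf3 d5 2. Qa4+ Bd7 3. Qb3 Rb8' | tr 'QRBN' 'DTLP'
# -> 1. Pf3 d5 2. Da4+ Ld7 3. Db3 Tb8
```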

Example script to stream games from Lichess

There is an included example script that can stream Lichess games to the terminal using the Lichess API. It displays the board using terminal escape codes. The games are automatically annotated with PGN notation and with the moves written out as a human would say them. This text can also be piped to a speech synthesizer such as espeak to get audio.

pgn-extract is a useful tool to convert Portable Game Notation (PGN) to Universal Chess Interface (UCI) moves (or do many other useful chess related things!).

Example script to generate an animated gif from PGN

There's also an example script included that can generate an animated gif from PGN using ffmpeg.

It creates an optimal color palette from the input images and generates an optimized animated gif. The last move (typically some checkmate) is displayed slightly longer.

References and chess related links

xargs: an example for parallel batch jobs
https://www.codemadness.org/xargs.html (2023-11-22, Hiltjo)


This describes a simple shellscript programming pattern to process a list of jobs in parallel. This script example is contained in one file.

Simple but less optimal example

#!/bin/sh
maxjobs=4

# fake program for example purposes.
someprogram() {
	echo "Yep yep, I'm totally a real program!"
	sleep "$1"
}

# run(arg1, arg2)
run() {
	echo "[$1] $2 started" >&2
	someprogram "$1" >/dev/null
	status="$?"
	echo "[$1] $2 done" >&2
	return "$status"
}

# process the jobs.
j=1
for f in 1 2 3 4 5 6 7 8 9 10; do
	run "$f" "something" &

	jm=$((j % maxjobs)) # shell arithmetic: modulo
	test "$jm" = "0" && wait
	j=$((j+1))
done
wait

Why is this less optimal

This is less optimal because it waits until all jobs in the same batch are finished (each batch contains $maxjobs items).

For example with 2 items per batch and 4 total jobs it could be:

  • Job 1 is started.
  • Job 2 is started.
  • Job 2 is done.
  • Job 1 is done.
  • Wait: wait on process status of all background processes.
  • Job 3 in new batch is started.

This could be optimized to:

  • Job 1 is started.
  • Job 2 is started.
  • Job 2 is done.
  • Job 3 in new batch is started (immediately).
  • Job 1 is done.
  • ...

It also does not handle signals such as SIGINT (^C); the xargs example below does.

Example

#!/bin/sh
maxjobs=4

# fake program for example purposes.
someprogram() {
	echo "Yep yep, I'm totally a real program!"
	sleep "$1"
}

# run(arg1, arg2)
run() {
	echo "[$1] $2 started" >&2
	someprogram "$1" >/dev/null
	status="$?"
	echo "[$1] $2 done" >&2
	return "$status"
}

# child process job.
if test "$CHILD_MODE" = "1"; then
	run "$1" "$2"
	exit "$?"
fi

# generate a list of jobs for processing.
list() {
	for f in 1 2 3 4 5 6 7 8 9 10; do
		printf '%s\0%s\0' "$f" "something"
	done
}

# process jobs in parallel.
list | CHILD_MODE="1" xargs -r -0 -P "${maxjobs}" -L 2 "$(readlink -f "$0")"

Run and timings

Although the above example is kind of stupid, it already shows that queueing jobs this way is more efficient.

Script 1:

time ./script1.sh
[...snip snip...]
real    0m22.095s

Script 2:

time ./script2.sh
[...snip snip...]
real    0m18.120s

How it works

The parent process:

  • The parent, using xargs, handles the queue of jobs and schedules the jobs to execute as a child process.
  • The list function writes the parameters to stdout. These parameters are separated by the NUL byte separator. The NUL byte separator is used because this character cannot be used in filenames (which can contain spaces or even newlines) and cannot be used in text (the NUL byte terminates the buffer for a string).
  • The -L option must match the number of arguments that are specified for the job. It splits the specified parameters per job.
  • The expression "$(readlink -f "$0")" gets the absolute path to the shellscript itself. This is passed as the executable to run for xargs.
  • xargs calls the script itself with the specified parameters it is being fed. The environment variable $CHILD_MODE is set to indicate to the script itself it is run as a child process of the script.

The child process:

  • The command-line arguments are passed by the parent using xargs.

  • The environment variable $CHILD_MODE is set to indicate to the script itself it is run as a child process of the script.

  • The script itself (run as a child-mode process) only executes the task and signals its status back to xargs and the parent.

  • The exit status of the child program is signaled to xargs. This could be handled, for example to stop on the first failure (in this example it is not). For example, if the program is killed, stopped or the exit status is 255, then xargs also stops running.

Description of used xargs options

From the OpenBSD man page: https://man.openbsd.org/xargs

xargs - construct argument list(s) and execute utility

Options explained:

  • -r: Do not run the command if there are no arguments. Normally the command is executed at least once even if there are no arguments.
  • -0: Change xargs to expect NUL ('\0') characters as separators, instead of spaces and newlines.
  • -P maxprocs: Parallel mode: run at most maxprocs invocations of utility at once.
  • -L number: Call utility for every number of non-empty lines read. A line ending in unescaped white space and the next non-empty line are considered to form one single line. If EOF is reached and fewer than number lines have been read then utility will be called with the available lines.

xargs options -0 and -P, portability and historic context

Some of the options, like -P, are as of writing (2023) non-POSIX: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/xargs.html. However, many systems have supported this useful extension for many years now.

The specification even mentions implementations which support parallel operations:

"The version of xargs required by this volume of POSIX.1-2017 is required to wait for the completion of the invoked command before invoking another command. This was done because historical scripts using xargs assumed sequential execution. Implementations wanting to provide parallel operation of the invoked utilities are encouraged to add an option enabling parallel invocation, but should still wait for termination of all of the children before xargs terminates normally."

Some historic context:

The xargs -0 option was added on 1996-06-11 by Theo de Raadt, about a year after the NetBSD import (over 27 years ago at the time of writing):

CVS log

On OpenBSD the xargs -P option was added on 2003-12-06 by syncing the FreeBSD code:

CVS log

Looking at the imported git history log of GNU findutils (which contains xargs), the very first commit already had the -0 and -P options:

git log

commit c030b5ee33bbec3c93cddc3ca9ebec14c24dbe07
Author: Kevin Dalley <kevin@seti.org>
Date:   Sun Feb 4 20:35:16 1996 +0000

    Initial revision

xargs: some incompatibilities found

  • With the -0 option, empty fields are handled differently across implementations.
  • The -n and -L options don't work correctly in many of the BSD implementations. Some count empty fields, some don't. Early implementations in FreeBSD and OpenBSD only processed the first line. In OpenBSD this was improved around 2017.

Depending on what you want to do, a workaround could be to use the -0 option with a single field and the -n flag, then in each child program invocation split the field by a separator.
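A sketch of that workaround, assuming the inner separator (':' here) never occurs inside a field: pack both job parameters into one NUL-terminated field and split it again inside the child invocation:

```shell
# One NUL-terminated field per job; ':' is the inner field separator.
# xargs appends the field as $1 of the inline sh script.
printf '%s\0' '1:something' '2:otherthing' |
xargs -0 -n 1 sh -c '
	arg1="${1%%:*}"	# part before the first ":"
	arg2="${1#*:}"	# part after the first ":"
	echo "job $arg1: $arg2"
' inner
```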

References

Improved Youtube RSS/Atom feed
https://www.codemadness.org/youtube-feed.html (2023-11-20, Hiltjo)


... improved at least for my preferences ;)

It scrapes the channel data from Youtube and combines it with the channel's parsed Atom feed.

The Atom parser is based on sfeed, with some of the code removed because it is not needed by this program. It scrapes the metadata of the videos from the channel's HTML page and uses my custom JSON parser to convert the Javascript/JSON structure.

This JSON parser is also used by the json2tsv tool. The program has few dependencies.

Features

  • Add the video duration to the title to quickly see how long the video is.
  • Filter out Youtube shorts and upcoming videos / announcements: only videos are shown.
  • Supports more output formats: Atom, JSON Feed and the sfeed Tab-Separated-Value format.
  • Easy to build and deploy: can be run as a CGI program, as a statically-linked binary in a chroot.
  • Secure: in addition to running in a chroot it can use pledge(2) and unveil(2) on OpenBSD to restrict system calls and access to the filesystem.

How to use

There is an option to run directly from the command-line or in CGI mode. When the environment variable $REQUEST_URI is set, it automatically runs in CGI mode.

Command-line usage:

youtube_feed channelid atom
youtube_feed channelid gph
youtube_feed channelid html
youtube_feed channelid json
youtube_feed channelid tsv
youtube_feed channelid txt

CGI program usage:

The last basename part of the URL should be the channelid plus the output format extension. It defaults to TSV if there is no extension. The CGI program can be used with an HTTPd or a Gopher daemon such as geomyidae.

For example:

Atom XML:     https://codemadness.org/yt-chan/UCrbvoMC0zUvPL8vjswhLOSw.xml
HTML:         https://codemadness.org/yt-chan/UCrbvoMC0zUvPL8vjswhLOSw.html
JSON:         https://codemadness.org/yt-chan/UCrbvoMC0zUvPL8vjswhLOSw.json
TSV:          https://codemadness.org/yt-chan/UCrbvoMC0zUvPL8vjswhLOSw.tsv
twtxt:        https://codemadness.org/yt-chan/UCrbvoMC0zUvPL8vjswhLOSw.txt
TSV, default: https://codemadness.org/yt-chan/UCrbvoMC0zUvPL8vjswhLOSw

Gopher dir:   gopher://codemadness.org/1/feed.cgi/UCrbvoMC0zUvPL8vjswhLOSw.gph
Gopher TSV:   gopher://codemadness.org/0/feed.cgi/UCrbvoMC0zUvPL8vjswhLOSw
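The basename parsing (channel id plus optional output-format extension, defaulting to TSV) can be sketched in shell. The real program does this in C, and the variable names here are made up:

```shell
uri='/yt-chan/UCrbvoMC0zUvPL8vjswhLOSw.json'
base="${uri##*/}"	# strip the directory part of the request URI
case "$base" in
*.*)	channelid="${base%.*}" format="${base##*.}" ;;
*)	channelid="$base" format="tsv" ;;	# no extension: default to TSV
esac
printf 'channel: %s, format: %s\n' "$channelid" "$format"
```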

An OpenBSD httpd.conf using slowcgi as an example:

server "codemadness.org" {
	location "/yt-chan/*" {
		request strip 1
		root "/cgi-bin/yt-chan"
		fastcgi socket "/run/slowcgi.sock"
	}
}

Using it with sfeed

sfeedrc example of an existing Youtube RSS/Atom feed:

# list of feeds to fetch:
feeds() {
	# feed <name> <feedurl> [basesiteurl] [encoding]
	# normal Youtube Atom feed.
	feed "yt IM" "https://www.youtube.com/feeds/videos.xml?channel_id=UCrbvoMC0zUvPL8vjswhLOSw"
}

Use the new Atom feed directly using the CGI-mode and Atom output format:

# list of feeds to fetch:
feeds() {
	# feed <name> <feedurl> [basesiteurl] [encoding]
	# new Youtube Atom feed.
	feed "idiotbox IM" "https://codemadness.org/yt-chan/UCrbvoMC0zUvPL8vjswhLOSw.xml"
}

... or convert directly using a custom connector program on the local system via the command-line:

# fetch(name, url, feedfile)
fetch() {
	case "$1" in
	"connector example")
		youtube_feed "$2";;
	*)
		curl -L --max-redirs 0 -H "User-Agent:" -f -s -m 15 \
			"$2" 2>/dev/null;;
	esac
}

# parse and convert input, by default XML to the sfeed(5) TSV format.
# parse(name, feedurl, basesiteurl)
parse() {
	case "$1" in
	"connector example")
		cat;;
	*)
		sfeed "$3";;
	esac
}

# list of feeds to fetch:
feeds() {
	# feed <name> <feedurl> [basesiteurl] [encoding]
	feed "connector example" "UCrbvoMC0zUvPL8vjswhLOSw"
}

Screenshot using sfeed_curses

Screenshot showing the improved Youtube feed

Clone

git clone git://git.codemadness.org/frontends

Browse

You can browse the source-code at:

The program is: youtube/feed

Dependencies

  • C compiler.
  • LibreSSL + libtls.

Build and install

$ make
# make install

That's all

I hope that by sharing this it is useful to someone other than me as well.

webdump HTML to plain-text converter
https://www.codemadness.org/webdump.html (2023-11-20, Hiltjo)


webdump is (yet another) HTML to plain-text converter tool.

It reads HTML in UTF-8 from stdin and writes plain-text to stdout.

Goals and scope

My main goal for this tool is converting HTML mails and HTML content in RSS feeds to plain-text.

The tool only converts HTML to plain-text on stdout, similar to links -dump or lynx -dump, but simpler and more secure.

  • HTML and XHTML will be supported.
  • There will be some workarounds and quirks for broken and legacy HTML code.
  • It will be usable and secure for reading HTML from mails and RSS/Atom feeds.
  • No remote resources which are part of the HTML will be downloaded: images, video, audio, etc. But these may be visible as a link reference.
  • Data will be written to stdout. Intended for plain-text or a text terminal.
  • No support for Javascript, CSS, frame rendering or form processing.
  • No HTTP or network protocol handling: HTML data is read from stdin.
  • Listings of references, and some options to extract them in a list format usable for scripting. Some references are: link anchors, images, audio, video, HTML (i)frames, etc.
  • Security: on OpenBSD it uses pledge("stdio", NULL).
  • Keep the code relatively small, simple and hackable.

Features

  • Support for word-wrapping.
  • A mode to enable basic markup: bold, underline, italic and blink ;)
  • Indentation of headers, paragraphs, pre and list items.
  • Basic support to query elements or hide them.
  • Show link references.
  • Show link references and resources such as img, video, audio, subtitles.
  • Export link references and resources to a TAB-separated format.

Usage examples

url='https://codemadness.org/sfeed.html'

curl -s "$url" | webdump -r -b "$url" | less

curl -s "$url" | webdump -8 -a -i -l -r -b "$url" | less -R

curl -s "$url" | webdump -s 'main' -8 -a -i -l -r -b "$url" | less -R

Yes, all these option flags look ugly; a shellscript wrapper could be used :)

Practical examples

To use webdump as an HTML-to-text filter, for example in the mutt mail client, change in ~/.mailcap:

text/html; webdump -i -l -r < %s; needsterminal; copiousoutput

In mutt you should then add:

auto_view text/html

Using webdump as an HTML-to-text filter for sfeed_curses (otherwise the default is lynx):

SFEED_HTMLCONV="webdump -d -8 -r -i -l -a" sfeed_curses ~/.sfeed/feeds/*

Query/selector examples

The query syntax using the -s option is a bit inspired by CSS (but much more limited).

To get the title from an HTML page:

url='https://codemadness.org/sfeed.html'

title=$(curl -s "$url" | webdump -s 'title')
printf '%s\n' "$title"

List audio and video-related content from an HTML page, redirecting fd 3 to fd 1 (stdout):

url="https://media.ccc.de/v/051_Recent_features_to_OpenBSD-ntpd_and_bgpd"
curl -s "$url" | webdump -x -s 'audio,video' -b "$url" 3>&1 >/dev/null | cut -f 2

Clone

git clone git://git.codemadness.org/webdump

Browse

You can browse the source-code at:

Download releases

Releases are available at:

Build and install

$ make
# make install

Dependencies

  • C compiler.
  • libc + some BSDisms.

Trade-offs

All software has trade-offs.

webdump processes HTML in a single pass. It does not buffer the full DOM tree, although due to the nature of HTML/XML some parts, like attributes, need to be buffered.

Rendering tables in webdump is very limited. Twibright Links has really nice table rendering, but implementing a similar feature in the current design of webdump would make the code much more complex: Twibright Links processes a full DOM tree and processes the tables in multiple passes (to measure the table cells) etc. Tables can of course also be nested, or be HTML tables used for creating layouts (mostly on older webpages).

These trade-offs and preferences are chosen for now; they may change in the future. Fortunately there are the usual good suspects for HTML to plain-text conversion, each with their own chosen trade-offs of course.

Setup your own mail paste service
https://www.codemadness.org/mailservice.html (2023-10-25, Hiltjo)


How it works

  • The user sends a mail with an attachment to a certain mail address, for example: paste@somehost.org
  • The mail daemon configuration has a mail alias to pipe the raw mail to a shellscript.
  • This shellscript processes the raw mail contents from stdin.

What it does

  • Process a mail with the attachments automatically.
  • The script processes the attachments in the mail and stores them.
  • It will mail (back) the URL where the file(s) are stored.

This script is tested on OpenBSD using OpenBSD smtpd, OpenBSD httpd and the gopher daemon geomyidae.

Install dependencies

On OpenBSD:

pkg_add mblaze

smtpd mail configuration

In your mail aliases (for example /etc/mail/aliases) put:

paste: |/usr/local/bin/paste-mail

This pipes the mail to the script paste-mail for processing; this script is described below. Copy the contents below to /usr/local/bin/paste-mail

Script:

#!/bin/sh

d="/home/www/domains/www.codemadness.org/htdocs/mailpaste"
tmpmsg=$(mktemp)
tmpmail=$(mktemp)

cleanup() {
	rm -f "$tmpmail" "$tmpmsg"
}

# store whole mail from stdin temporarily, on exit remove temporary file.
trap "cleanup" EXIT
cat > "$tmpmail"

# mblaze: don't store mail sequence.
MAILSEQ=/dev/null
export MAILSEQ

# get from address (without display name).
from=$(maddr -a -h 'From' /dev/stdin < "$tmpmail")

# check if allowed or not.
case "$from" in
"hiltjo@codemadness.org")
	;;
*)
	exit 0;;
esac

# prevent mail loop.
if printf '%s' "$from" | grep -q "paste@"; then
	exit 0
fi

echo "Thank you for using the enterprise paste service." > "$tmpmsg"
echo "" >> "$tmpmsg"
echo "Your file(s) are available at:" >> "$tmpmsg"
echo "" >> "$tmpmsg"

# process each attachment.
mshow -n -q -t /dev/stdin < "$tmpmail" | sed -nE 's@.*name="(.*)".*@\1@p' | while read -r name; do
	test "$name" = "" && continue

	# extract attachment.
	tmpfile=$(mktemp -p "$d" XXXXXXXXXXXX)
	mshow -n -O /dev/stdin "$name" < "$tmpmail" > "$tmpfile"

	# use file extension.
	ext="${name##*/}"
	case "$ext" in
	*.tar.*)
		# special case: support .tar.gz, tar.bz2, etc.
		ext="tar.${ext##*.}";;
	*.*)
		ext="${ext##*.}";;
	*)
		ext="";;
	esac
	ext="${ext%%*.}"

	# use file extension if it is set.
	outputfile="$tmpfile"
	if test "$ext" != ""; then
		outputfile="$tmpfile.$ext"
	fi
	mv "$tmpfile" "$outputfile"
	b=$(basename "$outputfile")

	chmod 666 "$outputfile"
	url="gopher://codemadness.org/9/mailpaste/$b"

	echo "$name:" >> "$tmpmsg"
	echo "	Text   file: gopher://codemadness.org/0/mailpaste/$b" >> "$tmpmsg"
	echo "	Image  file: gopher://codemadness.org/I/mailpaste/$b" >> "$tmpmsg"
	echo "	Binary file: gopher://codemadness.org/9/mailpaste/$b" >> "$tmpmsg"
	echo "" >> "$tmpmsg"
done

echo "" >> "$tmpmsg"
echo "Sincerely," >> "$tmpmsg"
echo "Your friendly paste_bot" >> "$tmpmsg"

# mail back the user.
mail -r "$from" -s "Your files" "$from" < "$tmpmsg"

cleanup

The mail daemon processing the mail of course needs permission to write to the specified directory. The user receiving the mail needs to be able to read the files from a location they can access and have permission for.

Room for improvements

This is just an example script. There is room for many improvements. Feel free to change it in any way you like.

References

Bye bye

I hope this enterprise(tm) mail service is inspirational or something ;)

A simple TODO application
https://www.codemadness.org/todo-application.html (2022-07-01, Hiltjo)


This article describes a TODO application or workflow.

Workflow

It works like this:

  • Open any text editor.
  • Edit the text.
  • Save it in a file (probably named "TODO").
  • Feel happy about it.

The text format

The text format I use is this:

[indentation]<symbol><SPACE><item text><NEWLINE>

Most of the time an item is just one line. This format is just a general guideline to keep the items somewhat structured.

Symbols

Items are prefixed with a symbol.

  • - is an item which is planned to be done at some point.
  • x is an item which is done.
  • ? is an item which I'm not (yet) sure about. It can also be an idea.

I use an indentation with a TAB before an item to indicate item dependencies. The items can be nested.

For prioritization I put the most important items and sections at the top. These can be reshuffled as you wish of course.

To delete an item you remove the line. To archive an item you keep the line.

Sections

A section is a line which has no symbol. This is like a header to group items.

Example

Checklist for releasing project 0.1:
- Test project with different compilers and check for warnings.
- Documentation:
	- Proofread and make sure it matches all program behaviour.
	- Run mandoc -Tlint on the man pages.
	? Copy useful examples from the README file to the man page?
- Run testsuite and check for failures before release.


project 0.2:
? Investigate if feature mentioned by some user is worth adding.
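Because the format is line-based it composes well with standard text tools. For example, listing only the open ("-") and undecided ("?") items is a one-line grep:

```shell
# A tiny TODO file on stdin; keep lines whose first non-blank characters
# are "- " or "? " (the leading [[:space:]]* allows TAB-nested items).
printf 'project 0.1:\n- open item\nx done item\n\t? nested idea\n' |
grep '^[[:space:]]*[-?] '
```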

Example: secure remote cloud-encrypted edit session(tm)

ssh -t host 'ed TODO'

Example: multi-user secure remote cloud-encrypted edit session(tm)

ssh host
tmux or tmux a
ed TODO

Example: version-controlled multi-user secure remote cloud-encrypted edit session(tm)

ssh host
tmux or tmux a
ed TODO
git add TODO
git commit -m 'TODO: update'

Pros

  • When you open the TODO file the most important items are at the top.
  • The items are easy to read and modify with any text editor.
  • It is easy to extend the format and use with other text tools.
  • The format is portable: it works on sticky-notes on your CRT monitor too!
  • No monthly online subscription needed and full NO-money-back guarantee.

Cons

  • Complex lists with interconnected dependencies might not work, maybe.
  • It's assumed there is one person maintaining the TODO file. Merging items from multiple people at the same time in this workflow is not recommended.
  • It is too simple: no one will be impressed by it.

I hope this is inspirational or something,

2FA TOTP without crappy authenticator apps
https://www.codemadness.org/totp.html (2022-03-23, Hiltjo)


This describes how to use 2FA without using crappy authenticator "apps" or a mobile device.

Install

On OpenBSD:

pkg_add oath-toolkit zbar

On Void Linux:

xbps-install oath-toolkit zbar

There is probably a package for your operating system.

  • oath-toolkit is used to generate the digits based on the secret key.
  • zbar is used to scan the QR barcode text from the image.

Steps

Save the QR code image from the authenticator app or website to an image file. Then scan the QR code text from the image:

zbarimg image.png

An example QR code:

QR code example

The output is typically something like:

QR-Code:otpauth://totp/Example:someuser@codemadness.org?secret=SECRETKEY&issuer=Codemadness

You only need to scan this QR-code for the secret key once. Make sure to store the secret key in a private safe place and don't show it to anyone else.

Using the secret key, the following command outputs a 6-digit code by default. In this example we also assume the key is base32-encoded. There can be other parameters and options; this is documented in the Yubico URI string format reference below.

Command:

oathtool --totp -b SOMEKEY

  • The --totp option uses the time-variant TOTP mode, by default with HMAC SHA1.
  • The -b option uses base32 encoding of KEY instead of hex.

Tip: you can create a script that automatically puts the digits in the clipboard, for example:

oathtool --totp -b SOMEKEY | xclip
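Extracting the secret from the scanned otpauth URI can also be scripted; a sketch with sed, using the example URI from above:

```shell
uri='otpauth://totp/Example:someuser@codemadness.org?secret=SECRETKEY&issuer=Codemadness'
# Pull out the value of the secret= query parameter.
secret=$(printf '%s\n' "$uri" | sed -n 's/.*[?&]secret=\([^&]*\).*/\1/p')
printf '%s\n' "$secret"   # SECRETKEY
```

The result can then be fed to the command above: oathtool --totp -b "$secret".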

References

Setup an OpenBSD RISCV64 VM in QEMU
https://www.codemadness.org/openbsd-riscv64-vm.html (2021-10-23, Hiltjo)


This describes how to setup an OpenBSD RISCV64 VM in QEMU.

The shellscript below does the following:

  • Set up the disk image (raw format).
  • Patch the disk image with the OpenBSD miniroot file for the installation.
  • Download the opensbi and u-boot firmware files for qemu.
  • Run the VM with the supported settings.

The script is tested on Void Linux and OpenBSD-current hosts.

IMPORTANT!: The signature and checksum for the miniroot, u-boot and opensbi files are not verified. If the host is OpenBSD make sure to instead install the packages (pkg_add u-boot-riscv64 opensbi) and adjust the firmware path for the qemu -bios and -kernel options.

Shellscript

#!/bin/sh
# mirror list: https://www.openbsd.org/ftp.html
mirror="https://ftp.bit.nl/pub/OpenBSD/"
release="7.0"
minirootname="miniroot70.img"

miniroot() {
	test -f "${minirootname}" && return # download once

	url="${mirror}/${release}/riscv64/${minirootname}"
	curl -o "${minirootname}" "${url}"
}

createrootdisk() {
	test -f disk.raw && return # create once
	qemu-img create disk.raw 10G # create 10 GB disk
	dd conv=notrunc if=${minirootname} of=disk.raw # write miniroot to disk
}

opensbi() {
	f="opensbi.tgz"
	test -f "${f}" && return # download and extract once.

	url="${mirror}/${release}/packages/amd64/opensbi-0.9p0.tgz"
	curl -o "${f}" "${url}"

	tar -xzf "${f}" share/opensbi/generic/fw_jump.bin
}

uboot() {
	f="uboot.tgz"
	test -f "${f}" && return # download and extract once.

	url="${mirror}/${release}/packages/amd64/u-boot-riscv64-2021.07p0.tgz"
	curl -o "${f}" "${url}"

	tar -xzf "${f}" share/u-boot/qemu-riscv64_smode/u-boot.bin
}

setup() {
	miniroot
	createrootdisk
	opensbi
	uboot
}

run() {
	qemu-system-riscv64 \
		-machine virt \
		-nographic \
		-m 2048M \
		-smp 2 \
		-bios share/opensbi/generic/fw_jump.bin \
		-kernel share/u-boot/qemu-riscv64_smode/u-boot.bin \
		-drive file=disk.raw,format=raw,id=hd0 -device virtio-blk-device,drive=hd0 \
		-netdev user,id=net0,ipv6=off -device virtio-net-device,netdev=net0
}

setup
run
Sfeed_curses: a curses UI front-end for sfeed
https://www.codemadness.org/sfeed_curses-ui.html (2020-06-25, Hiltjo)


sfeed_curses is a curses UI front-end for sfeed. It is now part of sfeed.

It shows the TAB-separated feed items in a graphical command-line UI. The interface has a look inspired by the mutt mail client. It has a sidebar panel for the feeds, a panel with a listing of the items and a small statusbar for the selected item/URL. Some functions like searching and scrolling are integrated in the interface itself.

Features

  • Relatively few LOC, about 2.5K lines of C.
  • Few dependencies: a C compiler and a curses library (typically ncurses). It also requires a terminal (emulator) which supports UTF-8.
  • Easy to customize by modifying the small source-code and shellscripts.
  • Plumb support: open the URL or an enclosure URL directly with any program.
  • Pipe support: pipe the selected Tab-Separated Value line to a program for scripting purposes. Like viewing the content in any way you like.
  • Yank support: copy the URL or an enclosure URL to the clipboard.
  • Familiar keybinds: supports vi-like, emacs-like and arrow keys for actions.
  • Mouse support: it supports xterm X10 and extended SGR encoding.
  • Supports two ways of managing read/unread items. By default sfeed_curses marks the feed items of the last day as new/bold. Alternatively a simple plain-text list with the read URLs can be used.
  • UI layouts: supports vertical, horizontal and monocle (full-screen) layouts. Useful for different kinds of screen sizes.
  • Auto-execute keybind commands at startup to automate setting a preferred layout, toggling showing new items or other actions.

Like the format programs included in sfeed, you can run it by giving the feed files as arguments:

sfeed_curses ~/.sfeed/feeds/*

... or by reading directly from stdin:

sfeed_curses < ~/.sfeed/feeds/xkcd

It will show a sidebar if one or more files are specified as parameters. It will not show the sidebar by default when reading from stdin.

Screenshot showing what the UI looks like

On pressing the 'o' or ENTER keybind it will open the link URL of an item with the plumb program. On pressing the 'a', 'e' or '@' keybind it will open the enclosure URL if there is one. The default plumb program is set to xdg-open, but can be modified by setting the environment variable $SFEED_PLUMBER. The plumb program receives the URL as a command-line argument.

The TAB-Separated-Value line of the currently selected item in the feed file can be piped to a program by pressing the 'c', 'p' or '|' keybind. This allows much flexibility to make a content formatter or write other custom actions or views. This line is in the exact same format as described in the sfeed(5) man page.

The pipe program can be changed by setting the environment variable $SFEED_PIPER.
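
A minimal sketch of such a pipe program (the function name is hypothetical; per the sfeed(5) format the second field is the title and the third the link):

```shell
# Hypothetical pipe target: reads sfeed(5) TSV lines on stdin and prints
# "title: link" (field 2 is the title, field 3 the link).
sfeed_showurl() {
	awk -F '\t' '{ print $2 ": " $3 }'
}

# Demo with a fabricated feed line:
printf '1388454000\tSome article\thttps://example.org/a\n' | sfeed_showurl
```

Pointing $SFEED_PIPER at a script doing this and pressing '|' on an item would print just the title and URL.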

Screenshot showing the output of the pipe content script

The above screenshot shows the included sfeed_content shellscript which uses the lynx text-browser to convert HTML to plain-text. It pipes the formatted plain-text to the user's $PAGER (or "less").

Of course the script can be easily changed to use a different browser or HTML-to-text converter like:

It's easy to modify the color-theme by changing the macros in the source-code or by setting a predefined theme at compile-time. The README file contains information on how to set a theme. On the left is a TempleOS-like color-theme, on the right a newsboat-like colorscheme.

Screenshot showing a custom colorscheme

It supports a vertical, a horizontal and a monocle (full-screen) layout. This can be useful for different kinds of screen sizes. The keybinds '1', '2' and '3' switch between these layouts.

Screenshot showing the horizontal layout

Clone

git clone git://git.codemadness.org/sfeed

Browse

You can browse the source-code at:

Download releases

Releases are available at:

Build and install

$ make
# make install
hurl: HTTP, HTTPS and Gopher file grabber https://www.codemadness.org/hurl.html https://www.codemadness.org/hurl.html 2019-11-10T00:00:00Z Hiltjo hurl: HTTP, HTTPS and Gopher file grabber

Last modification on

hurl is a relatively simple HTTP, HTTPS and Gopher client/file grabber.

Why?

Sometimes (or most of the time?) you just want to fetch a file via the HTTP, HTTPS or Gopher protocol.

The focus of this tool is only this.

Features

  • Uses OpenBSD pledge(2) and unveil(2): no filesystem access is allowed (output is written to stdout).
  • Impose time-out and maximum size limits.
  • Use well-defined exitcodes for reliable scripting (curl sucks at this).
  • Send as little information as possible (no User-Agent etc by default).
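
The well-defined exit codes make hurl easy to script around. A minimal sketch (the fetch function is a stand-in for a real hurl invocation; the actual exit codes are listed in hurl's documentation):

```shell
#!/bin/sh
# "fetch" stands in for a real call such as:
#   hurl -m 1048576 -t 15 "$1" > atom.xml
# FAKE_STATUS fakes the exit status for this demonstration.
fetch() {
	return "${FAKE_STATUS:-0}"
}

if fetch 'https://codemadness.org/atom.xml'; then
	echo "fetch ok"
else
	echo "fetch failed with status $?" >&2
fi
```

With stable, documented exit codes a script can distinguish, for example, a time-out from a size-limit abort and act accordingly.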

Anti-features

  • No HTTP byte range support.
  • No HTTP User-Agent.
  • No HTTP If-Modified-Since/If-* support.
  • No HTTP auth support.
  • No HTTP/2+ support.
  • No HTTP keep-alive.
  • No HTTP chunked-encoding support.
  • No HTTP redirect support.
  • No (GZIP) compression support.
  • No cookie-jar or cookie parsing support.
  • No Gopher text handling (".\r\n").
  • ... etc...

Dependencies

  • C compiler (C99).
  • libc + some BSD functions like err() and strlcat().
  • LibreSSL(-portable)
  • libtls (part of LibreSSL).

Optional dependencies

Clone

git clone git://git.codemadness.org/hurl

Browse

You can browse the source-code at:

Download releases

Releases are available at:

Build and install

$ make
# make install

Examples

Fetch the Atom feed from this site using a maximum filesize limit of 1MB and a time-out limit of 15 seconds:

hurl -m 1048576 -t 15 "https://codemadness.org/atom.xml"

There is an -H option to add custom headers. This way some of the anti-features listed above are supported. For example some CDNs like Cloudflare are known to block empty or certain User-Agents.

User-Agent:

hurl -H 'User-Agent: some browser' 'https://codemadness.org/atom.xml'

HTTP Basic Auth (base64-encoded username:password):

hurl -H 'Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=' \
	'https://codemadness.org/atom.xml'
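
The base64 value can be produced like this (a sketch assuming openssl is available; any base64 encoder works):

```shell
# Encode "username:password" for the Authorization header;
# printf '%s' avoids encoding a trailing newline.
printf '%s' 'username:password' | openssl base64
```

This prints dXNlcm5hbWU6cGFzc3dvcmQ=, the value used in the example above.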

GZIP (this assumes the server response is gzip-compressed):

hurl -H 'Accept-Encoding: gzip' 'https://somesite/' | gzip -d
json2tsv: a JSON to TSV converter https://www.codemadness.org/json2tsv.html https://www.codemadness.org/json2tsv.html 2019-10-13T00:00:00Z Hiltjo json2tsv: a JSON to TSV converter

Last modification on

Convert JSON to TSV or separated output.

json2tsv reads JSON data from stdin. By default it outputs each JSON node as one line in a TAB-Separated Value format.

TAB-Separated Value format

The output format per line is:

nodename<TAB>type<TAB>value<LF>

Characters such as newline, TAB and backslash (\n, \t and \\) are escaped in the nodename and value fields. Other control-characters are removed.

The type field is a single byte and can be:

  • a for array
  • b for bool
  • n for number
  • o for object
  • s for string
  • ? for null

Filtering on the first field "nodename" is easy using awk for example.
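
For example, a sketch that prints the value of every ".title" string node (the printf fabricates two lines of json2tsv output, so it runs without json2tsv installed):

```shell
# Select field 3 (value) of rows where field 1 (nodename) is ".title"
# and field 2 (type) is "s" for string.
printf '.title\ts\tHello\n.count\tn\t42\n' | \
	awk -F '\t' '$1 == ".title" && $2 == "s" { print $3 }'
```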

Features

  • Accepts all valid JSON.
  • Designed to work well with existing UNIX programs like awk and grep.
  • Straightforward and not many lines of code: about 475 lines of C.
  • Few dependencies: C compiler (C99), libc.
  • No need to learn a new (meta-)language for processing data.
  • The parser supports code point decoding and UTF-16 surrogates to UTF-8.
  • It does not output control-characters to the terminal for security reasons by default (but it has a -r option if needed).
  • On OpenBSD it supports pledge(2) for syscall restriction: pledge("stdio", NULL).
  • Supports setting a different field separator and record separator with the -F and -R options.

Cons

  • For the tool there is additional overhead by processing and filtering data from stdin after parsing.
  • The parser does not do complete validation on numbers.
  • The parser accepts some bad input such as invalid UTF-8 (see RFC8259 - 8.1. Character Encoding). json2tsv reads from stdin and does not make assumptions about a "closed ecosystem" as described in the RFC.
  • The parser accepts some bad JSON input and "extensions" (see RFC8259 - 9. Parsers).
  • Encoded NUL bytes (\u0000) in strings are ignored. (see RFC8259 - 9. Parsers). "An implementation may set limits on the length and character contents of strings."
  • The parser is not the fastest possible JSON parser (but also not the slowest). For example: for ease of use, at the cost of performance all strings are decoded, even though they may be unused.

Why Yet Another JSON parser?

I wanted a tool that makes parsing JSON easier and work well from the shell, similar to jq.

sed and grep often work well enough for matching some value using a regex pattern, but they are not good enough to parse JSON correctly or to extract all information: just like parsing HTML/XML using regexes is not a good idea :P.

I didn't want to learn a new specific meta-language which jq has and wanted something simpler.

While it is more efficient to embed this query language for data aggregation, it is also less simple. In my opinion it is simpler to separate this and use pattern-processing by awk or another filtering/aggregating program.

For the parser: there are many JSON parsers out there, like the efficient jsmn parser. However, jsmn has a few behaviours that do not match what I want:

  • jsmn buffers data as tokens, which is very efficient, but also a bit annoying as an API as it requires another layer of code to interpret the tokens.
  • jsmn does not handle decoding strings by default, which is very efficient if you don't need parts of the data.
  • jsmn does not keep context of nested structures by default, so may require writing custom utility functions for nested data.

This is why I went for a parser design that uses a single callback per "node" type and keeps track of the current nested structure in a single array and emits that.

Clone

git clone git://git.codemadness.org/json2tsv

Browse

You can browse the source-code at:

Download releases

Releases are available at:

Build and install

$ make
# make install

Examples

A usage example that parses posts from the JSON API of reddit.com and formats them to a plain-text list using awk:

#!/bin/sh
curl -s -H 'User-Agent:' 'https://old.reddit.com/.json?raw_json=1&limit=100' | \
json2tsv | \
awk -F '\t' '
function show() {
	if (length(o["title"]) == 0)
		return;
	print n ". " o["title"] " by " o["author"] " in r/" o["subreddit"];
	print o["url"];
	print "";
}
$1 == ".data.children[].data" {
	show();
	n++;
	delete o;
}
$1 ~ /^\.data\.children\[\]\.data\.[a-zA-Z0-9_]*$/ {
	o[substr($1, 23)] = $3;
}
END {
	show();
}'

References

OpenBSD: setup a local auto-installation server https://www.codemadness.org/openbsd-autoinstall.html https://www.codemadness.org/openbsd-autoinstall.html 2019-04-24T00:00:00Z Hiltjo OpenBSD: setup a local auto-installation server

Last modification on

This guide describes how to setup a local mirror and installation/upgrade server that requires little or no input interaction.

Setup a local HTTP mirror

The HTTP mirror will be used to fetch the base sets and (optional) custom sets. In this guide we will assume 192.168.0.2 is the local installation server and mirror, the CPU architecture is amd64 and the OpenBSD release version is 6.5. We will store the files in the directory with the structure:

http://192.168.0.2/pub/OpenBSD/6.5/amd64/

Create the www serve directory and fetch all sets and install files (*.iso and install65.fs can be skipped if needed to save space):

$ cd /var/www/htdocs
$ mkdir -p pub/OpenBSD/6.5/amd64/
$ cd pub/OpenBSD/6.5/amd64/
$ ftp 'ftp://ftp.nluug.nl/pub/OpenBSD/6.5/amd64/*'

Verify signature and check some checksums:

$ signify -C -p /etc/signify/openbsd-65-base.pub -x SHA256.sig

Setup httpd(8) for simple file serving:

# $FAVORITE_EDITOR /etc/httpd.conf

A minimal example config for httpd.conf(5):

server "*" {
	listen on * port 80
}

The default www root directory is: /var/www/htdocs/

Enable the httpd daemon to start by default and start it now:

# rcctl enable httpd
# rcctl start httpd

Creating an installation response/answer file

The installer supports loading responses to the installation/upgrade questions from a simple text file. We can do a regular installation and copy the answers from the saved file to make an automated version of it.

Do a test installation, at the end of the installation or upgrade when asked the question:

Exit to (S)hell, (H)alt or (R)eboot?

Type S to go to the shell. Find the response file for an installation and copy it to some USB stick or write down the response answers:

cp /tmp/i/install.resp /mnt/usbstick/

A response file could be for example:

System hostname = testvm
Which network interface do you wish to configure = em0
IPv4 address for em0 = dhcp
IPv6 address for em0 = none
Which network interface do you wish to configure = done
Password for root account = $2b$10$IqI43aXjgD55Q3nLbRakRO/UAG6SAClL9pyk0vIUpHZSAcLx8fWk.
Password for user testuser = $2b$10$IqI43aXjgD55Q3nLbRakRO/UAG6SAClL9pyk0vIUpHZSAcLx8fWk.
Start sshd(8) by default = no
Do you expect to run the X Window System = no
Setup a user = testuser
Full name for user testuser = testuser
What timezone are you in = Europe/Amsterdam
Which disk is the root disk = wd0
Use (W)hole disk MBR, whole disk (G)PT, (O)penBSD area or (E)dit = OpenBSD
Use (A)uto layout, (E)dit auto layout, or create (C)ustom layout = a
Location of sets = http
HTTP proxy URL = none
HTTP Server = 192.168.0.2
Server directory = pub/OpenBSD/6.5/amd64
Unable to connect using https. Use http instead = yes
Location of sets = http
Set name(s) = done
Location of sets = done
Exit to (S)hell, (H)alt or (R)eboot = R

Get custom encrypted password for response file:

$ printf '%s' 'yourpassword' | encrypt

Changing the RAMDISK kernel disk image

rdsetroot(8) is exposed publicly in base since 6.5. Before 6.5 it was available in the /usr/src/ tree as elfrdsetroot; see also the rd(4) man page.

$ mkdir auto
$ cd auto
$ cp pubdir/bsd.rd .
$ rdsetroot -x bsd.rd disk.fs
# vnconfig vnd0 disk.fs
# mkdir mount
# mount /dev/vnd0a mount

Copy the response file (install.resp) to: mount/auto_install.conf (installation) or mount/auto_upgrade.conf (upgrade), but not both. In this guide we will do an auto-installation.

Unmount, detach and patch RAMDISK:

# umount mount
# vnconfig -u vnd0
$ rdsetroot bsd.rd disk.fs

To test copy bsd.rd to the root of some testmachine like /bsd.test.rd then (re)boot and type:

boot /bsd.test.rd

In the future (6.5+) it will be possible to copy a file named "/bsd.upgrade" to the root of a current system and have the kernel loaded automatically: see the script bsd.upgrade in CVS. Of course this is also possible with PXE boot or a custom USB/ISO. As explained in the autoinstall(8) man page: create either an auto_upgrade.conf or an auto_install.conf, but not both.

Create bootable miniroot

In this example the miniroot will boot the custom kernel, but fetch all the sets from the local network.

We will base our miniroot on the official version: miniroot65.fs.

We will create a 16MB miniroot to boot from (in this guide it is assumed the original miniroot is about 4MB and the modified kernel image fits in the new allocated space):

$ dd if=/dev/zero of=new.fs bs=512 count=32768

Copy first part of the original image to the new disk (no truncation):

$ dd conv=notrunc if=miniroot65.fs of=new.fs
# vnconfig vnd0 new.fs

Expand disk OpenBSD boundaries:

# disklabel -E vnd0
> b
Starting sector: [1024]
Size ('*' for entire disk): [8576] *
> r
Total free sectors: 1168.
> c a
Partition a is currently 8576 sectors in size, and can have a maximum
size of 9744 sectors.
size: [8576] *
> w
> q

or:

# printf 'b\n\n*\nc a\n*\nw\n' | disklabel -E vnd0

Grow the filesystem, then check it and mark it as clean:

# growfs -y /dev/vnd0a
# fsck -y /dev/vnd0a

Mount filesystem:

# mount /dev/vnd0a mount/

The kernel on the miniroot is GZIP compressed. Compress our modified bsd.rd and overwrite the original kernel:

# gzip -c9n bsd.rd > mount/bsd

Or, to save space (about 500KB), strip the debug symbols first (taken from the bsd.gz target in this Makefile):

$ cp bsd.rd bsd.strip
$ strip bsd.strip
$ strip -R .comment -R .SUNW_ctf bsd.strip
$ gzip -c9n bsd.strip > bsd.gz
$ cp bsd.gz mount/bsd

Now unmount and detach:

# umount mount/
# vnconfig -u vnd0

Now you can dd(1) the image new.fs to your bootable (USB) medium.

Adding custom sets (optional)

For patching /etc/rc.firsttime and other system files it is useful to use a customized installation set like siteVERSION.tgz, for example: site65.tgz. The sets can even be specified per host/MAC address like siteVERSION-$(hostname -s).tgz so for example: site65-testvm.tgz

When the installer checks the base sets on the mirror it looks for the file index.txt. To add custom sets, the site entries have to be added to it.

For example:

-rw-r--r--  1 1001  0    4538975 Oct 11 13:58:26 2018 site65-testvm.tgz

The filesize, permissions etc do not matter and are not checked by the installer. Only the filename is matched by a regular expression.
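
A sketch of adding such an entry (run in a scratch directory here for demonstration; on the mirror the directory would be /var/www/htdocs/pub/OpenBSD/6.5/amd64/):

```shell
# Append an ls(1) -l style line for the custom set to index.txt;
# the installer only matches the filename part of the line.
mkdir -p /tmp/demo-mirror
cd /tmp/demo-mirror
touch site65-testvm.tgz
ls -l site65-testvm.tgz >> index.txt
grep 'site65-testvm' index.txt
```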

Sign custom site* tarball sets (optional)

If you have custom sets without creating a signed custom release you will be prompted with the messages:

checksum test failed

and:

unverified sets: continue without verification

OpenBSD uses the program signify(1) to cryptographically sign and verify filesets.

To create a custom public/private keypair (of course make sure to store the private key privately):

$ signify -G -n -c "Custom 6.5 install" -p custom-65-base.pub -s custom-65-base.sec

Create new checksum file with filelist of the current directory (except SHA256* files):

$ printf '%s\n' * | grep -v SHA256 | xargs sha256 > SHA256

Sign SHA256 and store as SHA256.sig, embed signature:

$ signify -S -e -s /privatedir/custom-65-base.sec -m SHA256 -x SHA256.sig

Verify the created signature and data is correct:

$ signify -C -p /somelocation/custom-65-base.pub -x SHA256.sig

Copy only the public key to the RAMDISK:

$ cp custom-65-base.pub mount/etc/signify/custom-65-base.pub

Now we have to patch the install.sub file to check our public key. If you know a better way without having to patch this script, please let me know.

Change the variable PUB_KEY in the shellscript mount/install.sub from:

PUB_KEY=/etc/signify/openbsd-${VERSION}-base.pub

To:

PUB_KEY=/etc/signify/custom-${VERSION}-base.pub

And for upgrades from:

$UPGRADE_BSDRD &&
	PUB_KEY=/mnt/etc/signify/openbsd-$((VERSION + 1))-base.pub

To:

$UPGRADE_BSDRD &&
	PUB_KEY=/mnt/etc/signify/custom-$((VERSION + 1))-base.pub

Ideas

  • Patch rc.firsttime(8): and run syspatch, add ports, setup xenodm etc.
  • Custom partitioning scheme, see autoinstall(8) "URL to autopartitioning template for disklabel = url".
  • Setup pxeboot(8) to boot and install over the network using dhcpd(8) and tftpd(8) then not even some USB stick is required.

References

Idiotbox: Youtube interface https://www.codemadness.org/idiotbox.html https://www.codemadness.org/idiotbox.html 2019-02-10T00:00:00Z Hiltjo Idiotbox: Youtube interface

Last modification on

Idiotbox is a less resource-heavy Youtube interface. For viewing videos it is recommended to use it with mpv or mplayer with youtube-dl or yt-dlp.

For more (up-to-date) information see the README file.

Why

In my opinion the standard Youtube web interface is:

  • Non-intuitive, too much visual crap.
  • Too resource-hungry, both in CPU and bandwidth.
  • Doesn't work well on simpler (text-based) browsers such as netsurf and links.

Features

  • Doesn't use JavaScript.
  • Doesn't use (tracking) cookies.
  • CSS is optional.
  • Multiple interfaces available: HTTP CGI, command-line, Gopher CGI (gph), this is a work-in-progress.
  • Doesn't use or require the Google API.
  • CGI interface works nice in most browsers, including text-based ones.
  • On OpenBSD it runs "sandboxed" and it can be compiled as a static-linked binary with pledge(2), unveil(2) in a chroot.

Cons

  • Order by upload date is incorrect (same as on Youtube).
  • Some Youtube features are not supported.
  • Uses scraping so might break at any point.

Clone

git clone git://git.codemadness.org/frontends

Browse

You can browse the source-code at:

Download releases

Releases are available at:

View

You can view it here: https://codemadness.org/idiotbox/

For example you can search using the query string parameter "q": https://codemadness.org/idiotbox/?q=gunther+tralala

The gopher version is here: gopher://codemadness.org/7/idiotbox.cgi

Gopher HTTP proxy https://www.codemadness.org/gopher-proxy.html https://www.codemadness.org/gopher-proxy.html 2018-08-17T00:00:00Z Hiltjo Gopher HTTP proxy

Last modification on

For fun I wrote a small HTTP Gopher proxy CGI program in C. It only supports the basic Gopher types and has some restrictions to prevent some abuse.

For your regular Gopher browsing I recommend the simple Gopher client sacc.

For more information about Gopher check out gopherproject.org.

Clone

git clone git://git.codemadness.org/gopherproxy-c

Browse

You can browse the source-code at:

View

You can view it here: https://codemadness.org/gopherproxy/

For example you can also view my gopherhole using the proxy, the query string parameter "q" reads the URI: https://codemadness.org/gopherproxy/?q=codemadness.org

Due to abuse this service is (temporarily) disabled, but of course you can self-host it.

For authors writing crawler bots: please respect robots.txt and HTTP status codes, and test your code properly.

Setup your own file paste service https://www.codemadness.org/paste-service.html https://www.codemadness.org/paste-service.html 2018-03-10T00:00:00Z Hiltjo Setup your own file paste service

Last modification on

Setup SSH authentication

Make sure to setup SSH public key authentication so you don't need to enter a password each time and have a more secure authentication.

For example in the file $HOME/.ssh/config:

Host codemadness
	Hostname codemadness.org
	Port 22
	IdentityFile ~/.ssh/codemadness/id_rsa

Of course also make sure to generate the private and public keys.

Shell alias

Make an alias or function in your shell config:

pastesrv() {
	ssh user@codemadness "cat > /your/www/publicdir/paste/$1"
	echo "https://codemadness.org/paste/$1"
}

This function reads data from stdin, transfers it securely via SSH and writes it to a file at the specified path. This path can be visible via HTTP, gopher or another protocol. Then it writes the absolute URL to stdout; this URL can be copied to the clipboard and pasted anywhere, like in an e-mail or on IRC.

Usage and examples

To use it, here are some examples:

Create a patch of the last commit in the git repo and store it:

git format-patch --stdout HEAD^ | pastesrv 'somepatch.diff'

Create a screenshot of your current desktop and paste it:

xscreenshot | ff2png | pastesrv 'screenshot.png'

There are many other uses of course, use your imagination :)

Setup your own git hosting service https://www.codemadness.org/setup-git-hosting.html https://www.codemadness.org/setup-git-hosting.html 2018-02-25T00:00:00Z Hiltjo Setup your own git hosting service

Last modification on

This article assumes you use OpenBSD for the service files and OS-specific examples.

Why

A good reason to host your own git repositories is because of having and keeping control over your own computing infrastructure.

Some bad examples:

The same thing can happen with Github, Atlassian Bitbucket or other similar services. After all: they are just a company with commercial interests. These online services also have different pricing plans and various (arbitrary) restrictions. When you host it yourself the restrictions are the resource limits of the system and your connection, therefore it is a much more flexible solution.

Always make sure you own the software (which is Free or open-source) and you can host it yourself, so you will be in control of it.

Creating repositories

For the hosting it is recommended to use a so-called "bare" repository. A bare repository means no files are checked out in the folder itself. To create a bare repository use git init with the --bare argument:

$ git init --bare

I recommend creating a separate user and group for the source-code repositories. In the examples we will assume the user is called "src".

Login as the src user and create the files. To create a directory for the repos, in this example /home/src/src:

$ mkdir -p /home/src/src
$ cd /home/src/src
$ git init --bare someproject
$ $EDITOR someproject/description

Make sure the git-daemon process has access permissions to these repositories.

Install git-daemon (optional)

Using git-daemon you can clone the repositories publicly using the efficient git:// protocol. An alternative without having to use git-daemon is by using (anonymous) SSH, HTTPS or any public shared filesystem.

When you use a private-only repository I recommend just using SSH without git-daemon because it is secure.

Install the git package. The package should contain "git daemon":

# pkg_add git

Enable the daemon:

# rcctl enable gitdaemon

Set the gitdaemon service flags to use the src directory and use all the available repositories in this directory. The command-line flags "--export-all" exports all repositories in the base path. Alternatively you can use the "git-daemon-export-ok" file (see the git-daemon man page).

# rcctl set gitdaemon flags --export-all --base-path="/home/src/src"

To configure the service to run as the user _gitdaemon:

# rcctl set gitdaemon user _gitdaemon

To run the daemon directly as the user _gitdaemon (without dropping privileges from root to the user) set the following flags in /etc/rc.d/gitdaemon:

daemon_flags="--user=_gitdaemon"

Which will also avoid this warning while cloning:

"can't access /root/.git/config"

Now start the daemon:

# rcctl start gitdaemon

Cloning and fetching changes

To test and clone the repository do:

$ git clone git://yourdomain/someproject

If you skipped the optional git-daemon installation then just clone via SSH:

$ git clone ssh://youraccount@yourdomain:/home/src/src/someproject

When cloning via SSH make sure to setup private/public key authentication for security and convenience.

You should also make sure the firewall allows connections to services like the git daemon, HTTPd or SSH. For example, with OpenBSD pf something like this can be set in /etc/pf.conf:

tcp_services="{ ssh, gopher, http, https, git }"
pass in on egress proto tcp from any to (egress) port $tcp_services

Pushing changes

Add the repository as a remote:

$ git remote add myremote ssh://youraccount@yourdomain:/home/src/src/someproject

Then push the changes:

$ git push myremote master:master

Git history web browsing (optional)

Sometimes it's nice to browse the git history log of the repository in a web browser or some other program without having to look at the local repository.

It's also possible with these tools to generate an Atom feed and then use an RSS/Atom reader to track the git history:

My sfeed program can be used as an RSS/Atom reader.

Setting up git hooks (optional)

Using git hooks you can set up automated triggers, for example when pushing to a repository. Some useful examples can be:
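
As a sketch: a post-receive hook is a shellscript at hooks/post-receive inside the bare repository; git feeds it one line per updated ref on stdin. This one only logs the push, but the body could regenerate web pages or send a notification mail instead:

```shell
# stdin format per githooks(5): <old-sha> <new-sha> <refname>, one per ref.
post_receive() {
	while read -r old new ref; do
		echo "push: $ref: $old -> $new"
	done
}

# Demo with a fabricated update line:
printf 'abc123 def456 refs/heads/master\n' | post_receive
```

To use it, put the while-read loop in hooks/post-receive and make the file executable.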

Setup an OpenBSD SPARC64 VM in QEMU https://www.codemadness.org/openbsd-sparc64-vm.html https://www.codemadness.org/openbsd-sparc64-vm.html 2017-12-11T00:00:00Z Hiltjo Setup an OpenBSD SPARC64 VM in QEMU

Last modification on

This describes how to setup an OpenBSD SPARC64 VM in QEMU.

Create a disk image

To create a 5GB disk image:

qemu-img create -f qcow2 fs.qcow2 5G

Install

In this guide we'll use the installation ISO to install OpenBSD. Make sure to download the latest (stable) OpenBSD ISO, for example install62.iso.

  • Change -boot c to -boot d to boot from the CD-ROM and do a clean install.
  • Change -cdrom install62.iso to the location of your ISO file.
  • When the install is done type: halt -p
  • Change -boot d back to -boot c.

Start the VM:

#!/bin/sh
LC_ALL=C QEMU_AUDIO_DRV=none \
qemu-system-sparc64 \
	-machine sun4u,usb=off \
	-realtime mlock=off \
	-smp 1,sockets=1,cores=1,threads=1 \
	-rtc base=utc \
	-m 1024 \
	-boot c \
	-drive file=fs.qcow2,if=none,id=drive-ide0-0-1,format=qcow2,cache=none \
	-cdrom install62.iso \
	-device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-1,id=ide0-0-1 \
	-msg timestamp=on \
	-serial pty -nographic \
	-net nic,model=ne2k_pci -net user

The VM has the following properties:

  • No audio.
  • No USB.
  • No VGA graphics: serial console.
  • Netdev is ne0 (Realtek 8029).
  • 1024MB memory.

From your host connect to the serial device indicated by QEMU, for example:

(qemu) 2017-11-19T15:14:20.884312Z qemu-system-sparc64: -serial pty: char device redirected to /dev/ttyp0 (label serial0)

Then you can use the serial terminal emulator cu to attach:

cu -l /dev/ttyp0

Another option is to use the simple terminal (st) from suckless:

st -l /dev/ttyp0

To detach, the cu(1) man page says:

Typed characters are normally transmitted directly to the remote machine (which
does the echoing as well).  A tilde ('~') appearing as the first character of a
line is an escape signal; the following are recognized:

    ~^D or ~.  Drop the connection and exit.  Only the connection is
               dropped - the login session is not terminated.

On boot you have to type:

root device: wd0a
for swap use the default (wd0b): press enter

Initial settings on first boot (optional)

Automatic network configuration using DHCP

echo "dhcp" > /etc/hostname.ne0

To bring up the interface (will be automatic on the next boot):

sh /etc/netstart

Add a mirror to /etc/installurl for package installation. Make sure to lookup the most efficient/nearby mirror site on the OpenBSD mirror page.

echo "https://ftp.hostserver.de/pub/OpenBSD" > /etc/installurl
Tscrape: a Twitter scraper https://www.codemadness.org/tscrape.html https://www.codemadness.org/tscrape.html 2017-09-24T00:00:00Z Hiltjo Tscrape: a Twitter scraper

Last modification on

Tscrape is a Twitter web scraper and archiver.

Twitter removed the functionality to follow users via an RSS feed without authenticating or using their API. With this program you can format tweets in any way you like, relatively anonymously.

For more (up-to-date) information see the README file.

Clone

git clone git://git.codemadness.org/tscrape

Browse

You can browse the source-code at:

Download releases

Releases are available at:

Examples

Output format examples:

jsdatatable: a small datatable Javascript https://www.codemadness.org/datatable.html https://www.codemadness.org/datatable.html 2017-09-24T00:00:00Z Hiltjo jsdatatable: a small datatable Javascript

Last modification on

This is a small datatable Javascript with no dependencies.

Features

  • Small:
    • Filesize: +- 9.1KB.
    • Lines: +- 300, not much code, so hopefully easy to understand.
    • No dependencies on other libraries like jQuery.
  • Sorting on columns, multi-column support with shift-click.
  • Filtering values: case-insensitively, tokenized (separated by space).
  • Able to add custom filtering, parsing and sorting functions.
  • Helper function for delayed (150ms) filtering, so filtering feels more responsive for big datasets.
  • Permissive ISC license, see LICENSE file.
  • "Lazy scroll" mode:
    • fixed column headers and rendering of only the visible rows; this allows you to "lazily" render millions of rows.
  • Officially supported browsers are:
    • Firefox and Firefox ESR.
    • Chrome and most recent webkit-based browsers.
    • IE10+.

Why? and a comparison

It was created because all the other datatable scripts suck balls.

Most Javascripts nowadays have a default dependency on jQuery, Bootstrap or other frameworks.

jQuery adds about 97KB and Bootstrap adds about 100KB to your scripts and CSS as a dependency. This increases the CPU, memory and bandwidth consumption and latency. It also adds complexity to your scripts.

jQuery was mostly used for backwards-compatibility in the Internet Explorer days, but most often it is not needed anymore. It contains functionality to query the DOM using CSS-like selectors, but this is now supported natively with for example document.querySelectorAll. Functionality like a JSON parser is now standard: JSON.parse().

Size comparison

All sizes are not "minified" or gzipped.

Name                             |   Total |      JS |   CSS | Images | jQuery
---------------------------------+---------+---------+-------+--------+-------
jsdatatable                      |  12.9KB |   9.1KB | 2.5KB |  1.3KB |      -
datatables.net (without plugins) | 563.4KB | 449.3KB |  16KB |  0.8KB | 97.3KB
jdatatable                       | 154.6KB |    53KB |   1KB |  3.3KB | 97.3KB

Of course jsdatatable has fewer features (less is more!), but it does 90% of what's needed. Because it is so small it is also much simpler to understand and extend with required features if needed.

See also: The website obesity crisis

Clone

git clone git://git.codemadness.org/jscancer

Browse

You can browse the source-code at:

It is in the datatable directory.

Download releases

Releases are available at:

Usage

Examples

See example.html for an example. A stylesheet file datatable.css is also included, it contains the icons as embedded images.

A table should have the classname "datatable" set, it must contain a <thead> for the column headers (<td> or <th>) and <tbody> element for the data. The minimal code needed for a working datatable:

<html>
<body>
<input class="filter-text" /><!-- optional -->
<table class="datatable">
	<thead><!-- columns -->
		<tr><td>Click me</td></tr>
	</thead>
	<tbody><!-- data -->
		<tr><td>a</td></tr>
		<tr><td>b</td></tr>
	</tbody>
</table>
<script type="text/javascript" src="datatable.js"></script>
<script type="text/javascript">var datatables = datatable_autoload();</script>
</body>
</html>

Column attributes

The following column attributes are supported:

  • data-filterable: if "1" or "true" the column can be filtered; default: "true".
  • data-parse: specifies how to parse the values; default: "string", which is datatable_parse_string(). See the PARSING section below.
  • data-sort: specifies how to sort the values; default: "default", which is datatable_sort_default(). See the SORTING section below.
  • data-sortable: if "1" or "true" the column can be sorted; default: "true".

Parsing

By default only parsing for the types date, float, int and string is supported, but other types can easily be added as a function with the name datatable_parse_<typename>(). The parse functions parse the data-value attribute when set, or else the cell content (in that order). Because of this behaviour you can set the actual values in the data-value attribute and use the cell content for display. This is useful to display and properly sort locale-aware currency, datetimes etc.
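As a sketch of such an extension: the function name below follows the datatable_parse_<typename>() convention described above, but the argument and return value are assumptions, not taken from the jsdatatable source. A hypothetical parser for European-style decimal notation, selected with data-parse="eurofloat" on a column:

```javascript
// Hypothetical custom parser following the datatable_parse_<typename>()
// naming convention. Assumption: it receives the raw string (the
// data-value attribute or the cell content) and returns the value to
// sort and compare on.
function datatable_parse_eurofloat(s) {
	// "1.234,56" -> 1234.56: strip the thousand separators, then
	// convert the decimal comma to a dot.
	var f = parseFloat(s.replace(/\./g, "").replace(",", "."));
	return isNaN(f) ? 0 : f; // fall back to zero (see the CAVEATS section).
}
```

Returning zero instead of NaN keeps the sorting behaviour defined for unparsable cells.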

Filtering

Filtering is done case-insensitively on the cell content and, when set, also on the data-value attribute. The filter string is split into tokens separated by spaces. Each token must match at least once per row for the row to be displayed.
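The matching rule can be sketched as follows; this is an illustrative re-implementation of the described behaviour, not the actual jsdatatable code:

```javascript
// Illustrative sketch of the filtering rule: the filter string is split
// into tokens on spaces, and a row is shown only if every token matches
// (case-insensitively) somewhere in the row's cell texts.
function rowMatches(filter, celltexts) {
	var haystack = celltexts.join("\n").toLowerCase();
	var tokens = filter.toLowerCase().split(" ").filter(function (t) {
		return t.length > 0;
	});
	return tokens.every(function (t) {
		return haystack.indexOf(t) !== -1;
	});
}
```

For example, the filter "foo bar" matches a row containing the cell "Foobar", because both tokens occur in it.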

Sorting

Sorting is done on the parsed values, by default with the function datatable_sort_default(). To change this, set a custom name string as the data-sort attribute on the column, which translates to the function datatable_sort_<customname>().

Some applications use locale-specific values, like currency, decimal numbers and datetimes. Some people also like to use icons or extended HTML elements inside the cell. Because jsdatatable sorts on the parsed value (see the PARSING section), it is possible to sort on the data-value attribute values and use the cell content for display.

For example:

  • currency, decimal numbers: use the data-value attribute with a floating-point number and set data-parse on the column to "float".
  • date/datetimes: use the data-value attribute with UNIX timestamps and set data-parse on the column to "int"; or set data-parse on the column to "date" (datatable_parse_date()) and make sure the data-value attribute contains parsable time strings, like the Zulu time "2016-01-01T01:02:03Z".
  • icons: generally use the data-value attribute with an integer as the weight to sort on and set data-parse on the column to "int".
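A hypothetical custom sort function following the datatable_sort_<customname>() convention described in the SORTING section. It assumes such a function is a two-argument comparator over parsed values, like the callback of Array.prototype.sort; this signature is an assumption, not taken from the jsdatatable source:

```javascript
// Hypothetical custom sort function, selected with data-sort="emptylast"
// on a column. It sorts ascending, but always places empty or missing
// values last. Assumption: it is called as a comparator over two parsed
// values, like the callback of Array.prototype.sort.
function datatable_sort_emptylast(a, b) {
	var ae = (a === "" || a === null || a === undefined);
	var be = (b === "" || b === null || b === undefined);
	if (ae && be)
		return 0;
	if (ae)
		return 1;  // a sorts after b.
	if (be)
		return -1; // b sorts after a.
	return a < b ? -1 : (a > b ? 1 : 0);
}
```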

Dynamically update data

To update data dynamically see example-ajax.html for an example how to do this.

Caveats

  • A date, integer, float or other value must parse properly: when the parse function returns NaN, null, undefined etc., the sorting behaviour is also undefined. It is recommended to always fall back to a zero value for each type.
  • <tfoot> is not supported in datatables in "lazy" mode.

Demo / example

For the below example to work you need to have Javascript enabled.

datatable-example.html

]]>
Stagit-gopher: a static git page generator for gopher https://www.codemadness.org/stagit-gopher.html https://www.codemadness.org/stagit-gopher.html 2017-08-04T00:00:00Z Hiltjo Stagit-gopher: a static git page generator for gopher

Last modification on

stagit-gopher is a static page generator for Gopher. It creates the pages as static geomyidae .gph files. stagit-gopher is a modified version of the HTML version of stagit.

Read the README for more information about it.

I also run a gopherhole and stagit-gopher, you can see how it looks here: gopher://codemadness.org/1/git/

sacc is a good Gopher client to view it.

Features

  • Log of all commits from HEAD.
  • Log and diffstat per commit.
  • Show file tree with line numbers.
  • Show references: local branches and tags.
  • Detect README and LICENSE file from HEAD and link it as a webpage.
  • Detect submodules (.gitmodules file) from HEAD and link it as a webpage.
  • Atom feed of the commit log (atom.xml).
  • Atom feed of the tags/refs (tags.xml).
  • Make index page for multiple repositories with stagit-gopher-index.
  • After generating the pages (which is relatively slow), serving the files is very fast and simple and requires few resources, because the content is static; only a geomyidae Gopher server is required.
  • Security: all pages are static. No CGI or dynamic code is run for the interface. When used with a secure Gopher server such as geomyidae, it is privilege-dropped and chroot(2)'d.
  • Simple to setup: the content generation is clearly separated from serving it. This makes configuration as simple as copying a few directories and scripts.
  • Usable with Gopher clients such as lynx and sacc.

Cons

  • Not suitable for large repositories (2000+ commits), because diffstats are an expensive operation; the cache (-c flag) is a workaround for this in some cases.
  • Not suitable for large repositories with many files, because all files are written on each execution of stagit. This is because stagit shows the lines of text files and there is no "cache" for file metadata (which would add more complexity to the code).
  • Not suitable for repositories with many branches, a quite linear history is assumed (from HEAD).
  • Relatively slow to run the first time (about 3 seconds for sbase, 1500+ commits), incremental updates are faster.
  • Does not support some of the dynamic features cgit has (for HTTP), like:
    • Snapshot tarballs per commit.
    • File tree per commit.
    • History log of branches diverged from HEAD.
    • Stats (git shortlog -s).

This is by design, just use git locally.

Clone

git clone git://git.codemadness.org/stagit-gopher

Browse

You can browse the source-code at:

Download releases

Releases are available at:

]]>
Saait: a boring HTML page generator https://www.codemadness.org/saait.html https://www.codemadness.org/saait.html 2017-06-10T00:00:00Z Hiltjo Saait: a boring HTML page generator

Last modification on

Saait is the most boring static HTML page generator.

Meaning of saai (dutch): boring. Pronunciation: site

Read the README for more information about it.

I used to use shellscripts to generate the static pages, but realised I wanted a small program that works consistently on every platform. There are many incompatibilities and unimplemented features in the base tools across different platforms: Linux, UNIX, Windows.

This site is created using saait.

Features

  • Single small binary that handles everything. No run-time dependency on other tools.
  • Few lines of code (about 575 lines of C) and no dependencies except: a C compiler and libc.
  • Works on most platforms: tested on Linux, *BSD, Windows.
  • Simple template syntax.
  • Uses HTML output by default, but can easily be modified to generate any textual content, like gopher pages, wiki pages or other kinds of documents.
  • Out-of-the-box supports: creating an index page of all pages, Atom feed, twtxt.txt feed, sitemap.xml and urllist.txt.

Cons

  • Simple template syntax, but very basic. Requires C knowledge to extend it if needed.
  • Only basic (no nested) template blocks supported.

Clone

git clone git://git.codemadness.org/saait

Browse

You can browse the source-code at:

Download releases

Releases are available at:

Documentation / man page

Below is the saait(1) man page, which includes usage examples.


SAAIT(1)                    General Commands Manual                      SAAIT(1)

NAME
     saait  the most boring static page generator

SYNOPSIS
     saait [-c configfile] [-o outputdir] [-t templatesdir] pages...

DESCRIPTION
     saait writes HTML pages to the output directory.

     The arguments pages are page config files, which are processed in the
     given order.

     The options are as follows:

     -c configfile
             The global configuration file, the default is "config.cfg". Each
             page configuration file inherits variables from this file. These
             variables can be overwritten per page.

     -o outputdir
             The output directory, the default is "output".

     -t templatesdir
             The templates directory, the default is "templates".

DIRECTORY AND FILE STRUCTURE
     A recommended directory structure for pages, although the names can be
     anything:
     pages/001-page.cfg
     pages/001-page.html
     pages/002-page.cfg
     pages/002-page.html

     The directory and file structure for templates must be:
     templates/<templatename>/header.ext
     templates/<templatename>/item.ext
     templates/<templatename>/footer.ext

     The following filename prefixes are detected for template blocks and
     processed in this order:

     "header."
             Header block.

     "item."
             Item block.

     "footer."
             Footer block.

     The files are saved as output/<templatename>, for example
     templates/atom.xml/* will become: output/atom.xml. If a template block
     file does not exist then it is treated as if it was empty.

     Template directories starting with a dot (".") are ignored.

     The "page" templatename is special and will be used per page.

CONFIG FILE
     A config file has a simple key=value configuration syntax, for example:

     # this is a comment line.
     filename = example.html
     title = Example page
     description = This is an example page
     created = 2009-04-12
     updated = 2009-04-14

     The following variable names are special with their respective defaults:

     contentfile
             Path to the input content filename, by default this is the path
             of the config file with the last extension replaced to ".html".

     filename
             The filename or relative file path for the output file for this
             page.  By default the value is the basename of the contentfile.
             The path of the written output file is the value of filename
             appended to the outputdir path.

     A line starting with # is a comment and is ignored.

     TABs and spaces before and after a variable name are ignored.  TABs and
     spaces before a value are ignored.

TEMPLATES
     A template (block) is text.  Variables are replaced with the values set
     in the config files.

     The possible operators for variables are:

     $             Escapes an XML string, for example: < to the entity &lt;.

     #             Literal raw string value.

     %             Insert contents of file of the value of the variable.

     For example in a HTML item template:

     <article>
             <header>
                     <h1><a href="">${title}</a></h1>
                     <p>
                             <strong>Last modification on </strong>
                             <time datetime="${updated}">${updated}</time>
                     </p>
             </header>
             %{contentfile}
     </article>

EXIT STATUS
     The saait utility exits 0 on success, and >0 if an error occurs.

EXAMPLES
     A basic usage example:

     1.   Create a directory for a new site:

          mkdir newsite

     2.   Copy the example pages, templates, global config file and example
          stylesheets to a directory:

          cp -r pages templates config.cfg style.css print.css newsite/

     3.   Change the current directory to the created directory.

          cd newsite/

     4.   Change the values in the global config.cfg file.

     5.   If you want to modify parts of the header, like the navigation menu
          items, you can change the following two template files:
          templates/page/header.html
          templates/index.html/header.html

     6.   Create any new pages in the pages directory. For each config file
          there has to be a corresponding HTML file.  By default this HTML
          file has the path of the config file, but with the last extension
          (".cfg" in this case) replaced to ".html".

     7.   Create an output directory:

          mkdir -p output

     8.   After any modifications the following commands can be used to
          generate the output and process the pages in descending order:

          find pages -type f -name '*.cfg' -print0 | sort -zr | xargs -0 saait

     9.   Copy the modified stylesheets to the output directory also:

          cp style.css print.css output/

     10.  Open output/index.html locally in your webbrowser to review the
          changes.

     11.  To synchronize files, you can securely transfer them via SSH using
          rsync:

          rsync -av output/ user@somehost:/var/www/htdocs/

TRIVIA
     The most boring static page generator.

     Meaning of saai (dutch): boring, pronunciation of saait: site

SEE ALSO
     find(1), sort(1), xargs(1)

AUTHORS
     Hiltjo Posthuma <hiltjo@codemadness.org>
]]>
Stagit: a static git page generator https://www.codemadness.org/stagit.html https://www.codemadness.org/stagit.html 2017-05-10T00:00:00Z Hiltjo Stagit: a static git page generator

Last modification on

stagit is a static page generator for git.

Read the README for more information about it.

My git repository uses stagit, you can see how it looks here: https://codemadness.org/git/

Features

  • Log of all commits from HEAD.
  • Log and diffstat per commit.
  • Show file tree with linkable line numbers.
  • Show references: local branches and tags.
  • Detect README and LICENSE file from HEAD and link it as a webpage.
  • Detect submodules (.gitmodules file) from HEAD and link it as a webpage.
  • Atom feed of the commit log (atom.xml).
  • Atom feed of the tags/refs (tags.xml).
  • Make index page for multiple repositories with stagit-index.
  • After generating the pages (which is relatively slow), serving the files is very fast and simple and requires few resources, because the content is static; only an HTTP file server is required.
  • Security: all pages are static. No CGI or dynamic code is run for the interface. When used with a secure httpd such as OpenBSD httpd, it is privilege-separated, chroot(2)'d and pledge(2)'d.
  • Simple to setup: the content generation is clearly separated from serving it. This makes configuration as simple as copying a few directories and scripts.
  • Usable with text-browsers such as dillo, links, lynx and w3m.

Cons

  • Not suitable for large repositories (2000+ commits), because diffstats are an expensive operation; the cache (-c flag) or (-l maxlimit) is a workaround for this in some cases.
  • Not suitable for large repositories with many files, because all files are written for each execution of stagit. This is because stagit shows the lines of textfiles and there is no "cache" for file metadata (this would add more complexity to the code).
  • Not suitable for repositories with many branches, a quite linear history is assumed (from HEAD).

In these cases it is better to use cgit or possibly change stagit to run as a CGI program.

  • Relatively slow to run the first time (about 3 seconds for sbase, 1500+ commits), incremental updates are faster.
  • Does not support some of the dynamic features cgit has, like:
    • Snapshot tarballs per commit.
    • File tree per commit.
    • History log of branches diverged from HEAD.
    • Stats (git shortlog -s).

This is by design, just use git locally.

Clone

git clone git://git.codemadness.org/stagit

Browse

You can browse the source-code at:

Download releases

Releases are available at:

]]>
OpenBSD httpd, slowcgi and cgit https://www.codemadness.org/openbsd-httpd-and-cgit.html https://www.codemadness.org/openbsd-httpd-and-cgit.html 2015-07-05T00:00:00Z Hiltjo OpenBSD httpd, slowcgi and cgit

Last modification on

This is a guide to get cgit working with OpenBSD httpd(8) and slowcgi(8) in base. OpenBSD httpd is very simple to setup, but nevertheless this guide might help someone out there.

Installation

Install the cgit package:

# pkg_add cgit

or build it from ports:

# cd /usr/ports/www/cgit && make && make install

Configuration

httpd

An example of httpd.conf(5): httpd.conf.

slowcgi

By default the slowcgi UNIX domain socket is located at: /var/www/run/slowcgi.sock. For this example we use the defaults.

cgit

The cgit binary should be located at: /var/www/cgi-bin/cgit.cgi (default).

cgit uses the $CGIT_CONFIG environment variable to locate its config. By default on OpenBSD this is set to /conf/cgitrc (chroot), which is /var/www/conf/cgitrc. An example of the cgitrc file is here: cgitrc.

In this example the cgit cache directory is set to /cgit/cache (chroot), which is /var/www/cgit/cache. Make sure to give this path read and write permissions for cgit (www:daemon).

In the example the repository path (scan-path) is set to /htdocs/src (chroot), which is /var/www/htdocs/src.

The footer file is set to /conf/cgit.footer. Make sure this file exists or you will get warnings:

# >/var/www/conf/cgit.footer

Make sure cgit.css (stylesheet) and cgit.png (logo) are accessible, by default: /var/www/cgit/cgit.{css,png} (location can be changed in httpd.conf).

To support .tar.gz snapshots a static gzip binary is required in the chroot /bin directory:

cd /usr/src/usr.bin/compress
make clean && make LDFLAGS="-static -pie"
cp obj/compress /var/www/bin/gzip

Running the services

Enable the httpd and slowcgi services to automatically start them at boot:

# rcctl enable httpd slowcgi

Start the services:

# rcctl start httpd slowcgi
]]>
twitch: application to watch Twitch streams https://www.codemadness.org/twitch-interface.html https://www.codemadness.org/twitch-interface.html 2014-11-23T00:00:00Z Hiltjo twitch: application to watch Twitch streams

Last modification on

Update: as of 2020-05-06 I stopped maintaining it. Twitch now requires OAUTH and 2-factor authentication, which requires me to expose personal information such as my phone number.

Update: as of ~2020-01-03: I rewrote this application from Golang to C. The Twitch Kraken API used by the Golang version was deprecated. It was rewritten to use the Helix API.

This program allows you to view streams in your own video player, so the bloated Twitch interface is not needed. It is written in C.

Features

  • No Javascript, cookies, CSS optional.
  • Works well in all browsers, including text-based ones.
  • Has a HTTP CGI and Gopher CGI version.
  • Atom feed for VODs.

Clone

git clone git://git.codemadness.org/frontends

Browse

You can browse the source-code at:

]]>
Userscript: focus input field https://www.codemadness.org/userscript-focus-input-field.html https://www.codemadness.org/userscript-focus-input-field.html 2014-03-02T00:00:00Z Hiltjo Userscript: focus input field

Last modification on

This is a userscript I wrote a while ago which allows focusing the first input field on a page with ctrl+space. This is useful if a site doesn't specify the autofocus attribute for an input field and you don't want to switch to it using the mouse.

Download

Download userscript input_focus.user.js

]]>
Userscript: Youtube circumvent age verification https://www.codemadness.org/userscript-youtube-circumvent-age-verification.html https://www.codemadness.org/userscript-youtube-circumvent-age-verification.html 2013-02-21T00:00:00Z Hiltjo Userscript: Youtube circumvent age verification

Last modification on

This is a userscript I wrote a while ago which circumvents having to log in with an account on Youtube when a video requires age verification.

Note: this is an old script and does not work anymore.

Download

Download userscript Youtube_circumvent_sign_in.user.js

]]>
Userscript: block stupid fonts https://www.codemadness.org/userscript-block-stupid-fonts.html https://www.codemadness.org/userscript-block-stupid-fonts.html 2012-10-21T00:00:00Z Hiltjo Userscript: block stupid fonts

Last modification on

This is a userscript I wrote a while ago which white-lists fonts I like and blocks the rest. I made it because I don't like the inconsistency of the custom fonts used on a lot of websites.

Download

Download userscript Block_stupid_fonts_v1.2.user.js

Old version: Download userscript Block_stupid_fonts.user.js

]]>
Sfeed: simple RSS and Atom parser https://www.codemadness.org/sfeed-simple-feed-parser.html https://www.codemadness.org/sfeed-simple-feed-parser.html 2011-04-01T00:00:00Z Hiltjo Sfeed: simple RSS and Atom parser

Last modification on

Sfeed is an RSS and Atom parser (and a collection of formatting programs).

It converts RSS or Atom feeds from XML to a TAB-separated file. There are formatting programs included to convert this TAB-separated format to various other formats. There are also some programs and scripts included to import and export OPML and to fetch, filter, merge and order feed items.
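As an illustration of the TAB-separated format, the fields can be processed with standard tools such as awk or cut. The field layout assumed below (UNIX timestamp, title, link as the first three fields) should be checked against the README, which is the authoritative description; a sample line is used here instead of real sfeed output:

```shell
# Print "title: link" for each feed item from sfeed's TAB-separated
# output. Assumption: the first three fields are the UNIX timestamp,
# the title and the link.
printf '1301612400\tSfeed: simple RSS and Atom parser\thttps://codemadness.org/sfeed.html\n' |
awk -F '\t' '{ print $2 ": " $3 }'
```

Because the format is one item per line with TAB-separated fields, filtering, merging and ordering items reduces to ordinary line-based text processing.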

For the most (up-to-date) information see the README.

Clone

git clone git://git.codemadness.org/sfeed

Browse

You can browse the source-code at:

Download releases

Releases are available at:

Build and install

$ make
# make install

Screenshot and examples

Screenshot of sfeed piped to sfeed_plain using dmenu in vertical-list mode

The above screenshot uses the sfeed_plain formatting program with dmenu. This program outputs the feed items compactly, one per line, as plain text to stdout. The dmenu program reads these lines from stdin and displays them as an X11 list menu. When an item is selected in dmenu, it prints this item to stdout. A simple script can then filter for the URL in this output and perform some action, like opening it in a browser or opening a podcast in your music player.

For example:

#!/bin/sh
url=$(sfeed_plain "$HOME/.sfeed/feeds/"* | dmenu -l 35 -i | \
	sed -n 's@^.* \([a-zA-Z]*://\)\(.*\)$@\1\2@p')
test -n "${url}" && $BROWSER "${url}"

However this is just one way to format and interact with feed items. See also the README for other practical examples.

Below are some examples of output that are supported by the included format programs:

There is also a curses UI front-end, see the page sfeed_curses. It is now part of sfeed.

Videos

Here are some videos of other people showcasing some of the functionalities of sfeed, sfeed_plain and sfeed_curses. To the creators: thanks for making these!

]]>
Vim theme: relaxed https://www.codemadness.org/vim-theme-relaxed.html https://www.codemadness.org/vim-theme-relaxed.html 2011-01-07T00:00:00Z Hiltjo Vim theme: relaxed

Last modification on

This is a dark theme I made for vim. I have personally used it for quite a while now and tweaked it to my liking over time. It is made for gvim, but also works in 16-colour terminals (with small visual differences). The relaxed.vim file also has my .Xdefaults colours listed at the top for 16+-colour terminals on X11.

It is inspired by the "desert" theme available at https://www.vim.org/scripts/script.php?script_id=105, although I removed the cursive and bold styles and changed some colours I didn't like.

Download

relaxed.vim

Screenshot

Screenshot of VIM theme relaxed on the left is gvim (GUI), on the right is vim in urxvt (terminal)

]]>
Seturgent: set urgency hints for X applications https://www.codemadness.org/seturgent-set-urgency-hints-for-x-applications.html https://www.codemadness.org/seturgent-set-urgency-hints-for-x-applications.html 2010-10-31T00:00:00Z Hiltjo Seturgent: set urgency hints for X applications

Last modification on

Seturgent is a small utility to set an application's urgency hint. For most window managers and panel applications this will highlight the application and allow special actions.

Clone

    git clone git://git.codemadness.org/seturgent

Browse

You can browse the source-code at:

Download releases

Releases are available at:

]]>
DWM-hiltjo: my windowmanager configuration https://www.codemadness.org/dwm-hiltjo-my-windowmanager-configuration.html https://www.codemadness.org/dwm-hiltjo-my-windowmanager-configuration.html 2010-08-12T00:00:00Z Hiltjo DWM-hiltjo: my windowmanager configuration

Last modification on

DWM is a very minimal windowmanager. It has the most essential features I need, everything else is "do-it-yourself" or extending it with the many available patches. The vanilla version is less than 2000 SLOC. This makes it easy to understand and modify it.

I really like my configuration at the moment and want to share my changes. Some of the features listed below are patches from suckless.org I applied, but there are also some changes I made.

This configuration is entirely tailored for my preferences of course.

Features

  • Titlebar:
    • Shows all clients of the selected / active tags.
    • Divide application titlebars evenly among available space.
    • Colour urgent clients in the taskbar on active tags.
    • Left-click focuses clicked client.
    • Right-click toggles monocle layout.
    • Middle-click kills the clicked client.
  • Tagbar:
    • Only show active tags.
    • Colour inactive tags with urgent clients.
  • Layouts:
    • Cycle layouts with Modkey + Space (next) and Modkey + Control + Space (previous).
    • Fullscreen layout (hides topbar and removes borders).
  • Other:
    • Move tiled clients around with the mouse (drag-move), awesomewm-like.
    • Add some keybinds for multimedia keyboards (audio play / pause, mute, www, volume buttons, etc).
  • ... and more ;) ...

Clone

git clone -b hiltjo git://git.codemadness.org/dwm

Screenshot

Screenshot showing what dwm-hiltjo looks like

]]>
Query unused CSS rules on current document state https://www.codemadness.org/query-unused-css-rules-on-current-document-state.html https://www.codemadness.org/query-unused-css-rules-on-current-document-state.html 2010-04-21T00:00:00Z Hiltjo Query unused CSS rules on current document state

Last modification on

Today I was doing some web development and wanted to see all the rules in a stylesheet (CSS) that were not used for the current document. I wrote the following Javascript code which you can paste in the Firebug console and run:

(function() {
	for (var i = 0; i < document.styleSheets.length; i++) {
		var rules;
		try {
			rules = document.styleSheets[i].cssRules || [];
		} catch (e) {
			continue; /* cross-origin stylesheets disallow access to cssRules */
		}
		var sheethref = document.styleSheets[i].href || 'inline';
		for (var r = 0; r < rules.length; r++)
			if (rules[r].selectorText && /* skip @media, @font-face etc. */
			    !document.querySelectorAll(rules[r].selectorText).length)
				console.log(sheethref + ': "' + rules[r].selectorText + '" not found.');
	}
})();

This will output all the (currently) unused CSS rules per selector, the output can be for example:

http://www.codemadness.nl/blog/wp-content/themes/codemadness/style.css: "fieldset, a img" not found.
http://www.codemadness.nl/blog/wp-content/themes/codemadness/style.css: "#headerimg" not found.
http://www.codemadness.nl/blog/wp-content/themes/codemadness/style.css: "a:hover" not found.
http://www.codemadness.nl/blog/wp-content/themes/codemadness/style.css: "h2 a:hover, h3 a:hover" not found.
http://www.codemadness.nl/blog/wp-content/themes/codemadness/style.css: ".postmetadata-center" not found.
http://www.codemadness.nl/blog/wp-content/themes/codemadness/style.css: ".thread-alt" not found.

Just a trick I wanted to share, I hope someone finds this useful :)

For webkit-based browsers you can use the "Developer Tools": the "Audits" panel has a "Remove unused CSS rules" check under "Web Page Performance". For Firefox there is also Google Page Speed: https://code.google.com/speed/page-speed/ which adds an extra section to Firebug.

Tested on Chrome and Firefox.

]]>
Driconf: enabling S3 texture compression on Linux https://www.codemadness.org/driconf-enabling-s3-texture-compression-on-linux.html https://www.codemadness.org/driconf-enabling-s3-texture-compression-on-linux.html 2009-07-05T00:00:00Z Hiltjo Driconf: enabling S3 texture compression on Linux

Last modification on

Update: the DXTC patent expired on 2018-03-16, many distros enable this by default now.

S3TC (also known as DXTn or DXTC) is a patented lossy texture compression algorithm. See: https://en.wikipedia.org/wiki/S3TC for more detailed information. Many games use S3TC and if you use Wine to play games you definitely want to enable it if your graphics card supports it.

Because this algorithm was patented it is disabled by default on many Linux distributions.

To enable it you can install the library "libtxc" if your favorite OS has not installed it already.

For easy configuration you can install the optional utility DRIconf, which you can find at: https://dri.freedesktop.org/wiki/DriConf. DriConf can safely be removed after configuration.

Steps to enable it

Install libtxc_dxtn:

ArchLinux:

# pacman -S libtxc_dxtn

Debian:

# aptitude install libtxc-dxtn-s2tc0

Install driconf (optional):

ArchLinux:

# pacman -S driconf

Debian:

# aptitude install driconf

Run driconf and enable S3TC:

Screenshot of DRIconf window and its options

Additional links

]]>
Getting the USB-powerline bridge to work on Linux https://www.codemadness.org/getting-the-usb-powerline-bridge-to-work-on-linux.html https://www.codemadness.org/getting-the-usb-powerline-bridge-to-work-on-linux.html 2009-04-13T00:00:00Z Hiltjo Getting the USB-powerline bridge to work on Linux

Last modification on

NOTE: this guide is obsolete, a working driver is now included in the Linux kernel tree (since Linux 2.6.31)

Introduction

A USB to powerline bridge is a network device that, instead of using an ordinary Ethernet cable (CAT5 for example) or wireless LAN, uses the powerlines as a network to communicate with similar devices. A more comprehensive explanation of what it is and how it works can be found here: https://en.wikipedia.org/wiki/IEEE_1901.

Known products that use the Intellon 51x1 chipset:

  • MicroLink dLAN USB
  • "Digitus network"
  • Intellon USB Ethernet powerline adapter
  • Lots of other USB-powerline adapters...

To check if your device is supported:

$ lsusb | grep -i 09e1
Bus 001 Device 003: ID 09e1:5121 Intellon Corp.

If the vendor (09e1) and product (5121) IDs match then it's probably supported.

Installation

Get drivers from the official site: http://www.devolo.co.uk/consumer/downloads-44-microlink-dlan-usb.html?l=en or mirrored here. The drivers from the official site were/are more up-to-date.

Extract them:

$ tar -xzvf dLAN-linux-package-v4.tar.gz

Go to the extracted directory and compile them:

$ ./configure
$ make

Depending on the errors you got you might need to download and apply my patch:

$ cd dLAN-linux-package-v4/     (or other path to the source code)
$ patch < int51x1.patch

Try again:

$ ./configure
$ make

If that failed try:

$ ./configure
$ KBUILD_NOPEDANTIC=1 make

If that went OK install the drivers (as root):

# make install

Check if the "devolo_usb" module is loaded:

$ lsmod | grep -i devolo_usb

If it shows up then it's loaded. Now check if the interface is added:

$ ifconfig -a | grep -i dlanusb
dlanusb0 Link encap:Ethernet HWaddr 00:12:34:56:78:9A

Configuration

It is assumed you use a static IP address; otherwise you can just use your DHCP client to get an unused IP address from your DHCP server. Setting up the interface is done like this (change the IP address and netmask if yours differ):

# ifconfig dlanusb0 192.168.2.12 netmask 255.255.255.0

Checking if the network works

Try to ping an IP address on your network to test for a working connection:

$ ping 192.168.2.1
PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
64 bytes from 192.168.2.1: icmp_seq=1 ttl=30 time=2.49 ms
64 bytes from 192.168.2.1: icmp_seq=2 ttl=30 time=3.37 ms
64 bytes from 192.168.2.1: icmp_seq=3 ttl=30 time=2.80 ms
--- 192.168.2.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2005ms
rtt min/avg/max/mdev = 2.497/2.891/3.374/0.368 ms

You can now set up a network connection like you normally do with any Ethernet device. The route can be added like this for example:

# route add -net 0.0.0.0 netmask 0.0.0.0 gw 192.168.2.1 dlanusb0

Change the IP address of your local gateway accordingly. Also make sure your nameserver is set in /etc/resolv.conf, something like:

nameserver 192.168.2.1

Test your internet connection by doing for example:

$ ping codemadness.org
PING codemadness.org (64.13.232.151) 56(84) bytes of data.
64 bytes from acmkoieeei.gs02.gridserver.com (64.13.232.151): icmp_seq=1 ttl=52 time=156 ms
64 bytes from acmkoieeei.gs02.gridserver.com (64.13.232.151): icmp_seq=2 ttl=52 time=156 ms
64 bytes from acmkoieeei.gs02.gridserver.com (64.13.232.151): icmp_seq=3 ttl=52 time=155 ms
--- codemadness.org ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 155.986/156.312/156.731/0.552 ms

If this command failed you probably have not set up your DNS/gateway properly. If it worked then good for you :)

References

]]>
Gothic 1 game guide https://www.codemadness.org/gothic-1-guide.html https://www.codemadness.org/gothic-1-guide.html 2009-04-12T00:00:00Z Hiltjo Gothic 1 game guide

Last modification on

Disclaimer: Some (including myself) may find some of these hints/exploits cheating. This guide is just for educational and fun purposes. Some of these hints/tips apply to Gothic 2 as well. I got the meat exploit from a guide somewhere on the internet; I can't recall where, but kudos to that person. Some of the exploits I discovered myself.

Configuration

Widescreen resolution

Gothic supports widescreen resolutions with a small tweak, add the following text string as a command-line argument:

-zRes:1920,1200,32

This also works for Gothic 2. Here 1920 is the width, 1200 the height and 32 the bits per pixel, change this to your preferred resolution.
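For example, on Windows you could append the argument to the target of a shortcut to the game executable (the install path here is just an assumption):

```
"C:\Program Files (x86)\Gothic\System\Gothic.exe" -zRes:1920,1200,32
```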

Fix crash with Steam version

Disable the Steam overlay. If that doesn't work, rename GameOverlayRenderer.dll in your Steam folder to _GameOverlayRenderer.dll. I strongly recommend buying the better version from GOG.com. The GOG version has no DRM and allows easier modding; it also allows playing in most published languages (German, English, Polish) and includes some original artwork and the soundtrack.

Upgrade Steam version to stand-alone version and remove Steam DRM (Gothic 1 and 2)

You can install the Gothic playerkit and patches to remove the Steam DRM.

WorldOfGothic playerkit patches:

Play Gothic in a different language with English subtitles

If you're like me and have played the English version many times but would like to hear the (original) German voice audio, or if you would like to play with different audio than you're used to, you can copy the speech.vdf file of your preferred version over your game files. Optionally turn on subtitles. I've used this to play the English version of Gothic with the original German voice audio and English subtitles. This works best with the GOG version as it allows easier modding.
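A sketch of the copy, assuming you've kept the German speech.vdf from another installation (the paths are assumptions; back up the original first):

```shell
# from the game's Data directory, back up the English speech file
cd "Gothic/Data"
cp speech.vdf speech-english.vdf.bak

# then overwrite it with the German version
cp /path/to/german/speech.vdf speech.vdf
```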

Easy money/weapons/armour/other items

Steal from Huno

At night attack Huno the smith in the Old Camp and steal all his steel. Then make some weapons and sell them to a merchant. When you ask Huno about blacksmith equipment his stock will respawn with 5 of each kind of steel. This also gets you a fairly good starting weapon (requires 20 strength). His chest located near the sharpening stone and fire contains some steel as well; lock-pick it. The combination is: RRLRLL. The chest contains at least 20 raw steel; forge it to get 20 crude swords which you can sell for 50 ore each to a merchant. This will generate some nice starting money (1000+ ore) :)

Steal weapons from the castle in the Old Camp

This tip is useful for getting pretty good starting weapons.

Before entering the castle itself, drop your ore (Left Control + Down for me) in front of it. This ensures that when you get caught (and you probably will ;)) no ore will get stolen by the guards. Now use the "slip past guards" technique described below and you should be able to get into Gomez's castle. Run to the left where some weapons are stored. Make sure you at least steal the best weapon (battle sword) and steal as much as you can until you get whacked. I usually stand in the corner since that's where the best weapons are (battle sword, judgement sword, etc). You'll now have some nice starting weapon(s) and the good thing is they have very low attribute requirements (about 13 strength).

Location: screenshot

Free scraper armour in the New Camp

In the New Camp go to the mine and talk to Swiney at the bottom of "The Hollow". Ask who he is and then ask to join the scrapers. He will give you a "Diggers dress" worth 250 ore. It has the following stats: + 10 against weapons. + 5 against fire. This will also give you free entrance to the bar in the New Camp.

Unlimited water bottles in the New Camp

In the quest from Lefty you will be assigned to get water bottles from the rice lord. He will give you infinite amounts of water bottles, in batches of 12.

Armour amulet and increase HP potion

In the Old Camp in the main castle there are at least 3 chests with valuable items that don't require a key:

  • Middle right side (looking from the entrance), 1 chest:

    • lock combination: LLLLRLRL
    • loot:
      • +15 against weapons, +15 against arrows (amulet of stone skin) (worth: 1000 ore)
    • additionally there are 2 locked doors at the right side in this room. In the final room there are 3 floors with lots of chests.
      Video of the location

  • Left side, 1 chest:

    • lock combination: RLLLLLRR
    • loot:
      • +8 mana amulet (worth: 600 ore)
      • 2 potions (+70 hp)
      • dreamcall (weed)
      • 120 coins (worth: nothing)

  • Right side, 2 chests with:

    • lock combination: RLLLRLLR
    • loot:
      • armour amulets, +15 against weapons (worth: 600 ore)
      • maximum life potion, +10 maximum life (worth: 1000 ore)
      • speed potion (1 minute duration)
      • 4 potions (+70 hp)

Swamp/Sect Camp harvest twice

In the swamp-weed harvest quest you must get swamp-weed for a guru. After this quest you can collect the harvest again and keep it without consequences.

Exploits

Slip past guards

This exploit is really simple: just draw your weapon before you're "targeted" by the guard and run past them; this bypasses the dialog sequence. When you're just out of their range, holster your weapon again so the people around won't get pissed off.

Works really well on the guards in front of the Old Camp's castle, the Y'Berrion templars and the New Camp mercenaries near the water magicians, to name a few.

Video

Meat duplication

Go to a pan and focus/target it so it says "frying pan" or similar. Now open your inventory and select the meat. Cook the meat (for me: Left Control + Arrow up). The inventory should remain open. You'll now have twice as much meat as you had before. Do this a few times and you'll have a lot of meat, which is also handy for trading for ore/other items. This exploit does not work with the community patch applied.

Glitch through (locked) doors and walls

You can glitch through walls by strafing into them. When the player is partially clipped into a door or wall, jump forward to glitch through it.

Video

Fall from great heights

When you fall or jump from a height where you usually get fall damage you can do the following trick: slightly before hitting the ground, strafe left or right. This works because it resets the falling animation. There are other ways to cancel the falling animation as well, such as attacking with a weapon in the air.

Video

Experience / level up tips

Test of faith (extra exp)

You get an additional 750 exp (from Lares) when you forge the letter in the New Camp and then give it to Diego. You can still join both camps after this.

Fighting skeleton mages and their skeletons

An easy way to get more experience is to let the skeleton mages summon as many skeletons as they can, instead of rushing to kill the summoner immediately. After you have defeated all of them, kill the skeleton mage.

Permanent str/dex/mana/hp potions/items and teachers

When you want maximum power at the end of the game you should save up the items that give you a permanent boost. Teachers of strength, dexterity and mana won't train an attribute above 100. However, using potions and quest rewards you can increase them beyond 100.

You should also look out for the following:

  • Learn to get extra force into your punch from Horatio (strength +5; this can't be done after reaching 100 strength). Talking to Jeremiah in the New Camp bar unlocks the dialog option to train strength at Horatio.

  • Smoke the strongest non-quest joint (+2 mana).

Permanent potions in Sleeper temple

This one is really obvious, but I would like to point out that the mummies on each side of where Xardas is located have lots, and I mean lots, of permanent potions. This will give you a nice boost before the end battle.

Location, left and right corridor in the Sleeper temple: screenshot
Mummies, you can loot them: screenshot

Permanent potions as reward in quests

Always pick the permanent potion as a reward for quests when you can, for example the quest for delivering the message to the High Fire magicians (mana potion) or the one for fetching the almanac for the Sect Camp. Don't forget to pick up the potions from Riordian the water magician when you're doing the focus stones quest, it contains a strength and dexterity potion (+3).

Improve ancient ore armour further

In the last chapters the blacksmith Stone from the Old Camp is captured. If you save him from the prison cell in the Old Camp, the reward has a few options. One of the options is improving the Ancient Ore armour.

Good early game weapons available in chapter 1

Orc Hammer

Location: in a cave near bloodhounds, close to the mountain fort.
It can be reached via a path from the Swamp Camp up the mountain. Watch out for the bloodhounds; they can instantly kill you in the early game.

Location: screenshot
Stats: screenshot

Stats:

  • Type: one-handed
  • Damage: 50
  • Required strength: 22
  • Worth: 1000 ore

It has a very low strength requirement and high damage for the early game chapters. A downside is the lower weapon swing range. It is also a decent weapon against stone golems.

Old Battle Axe

Location: near Xardas's tower.
Watch out for a group of Biters lurking there.

Location: screenshot
Stats: screenshot

Stats:

  • Type: two-handed
  • Damage: 67
  • Required strength: 36
  • Worth: 1800 ore

It has a relatively low strength requirement and is available in chapter 1, or it can be sold for a decent amount.

Random/beginner tips

  • If you want to talk to an NPC, but some animation of theirs takes too long (like eating, drinking, smoking), you can sometimes force them out of it by quickly unsheathing/sheathing your weapon.

  • When in the Old Camp: Baal Parvez can take you to the Sect Camp, he can be found near the campfire near Fisk and Dexter. Mordrag can take you to the New Camp, he can be found near the south gate, slightly after the campfire near Baal Parvez.

    When you follow them and they kill monsters, you also get the experience.

  • The NPC Wolf in the New Camp sells "The Bloodflies" book for 150 ore. When you read this book you learn how to remove bloodfly parts (without having to spend learning points). After you've read the book and learned its skill you can sell the book back for 75 ore. This investment quickly pays for itself: per bloodfly you get a sting (25 ore unsold value) and 2 wings (15 ore each, unsold value).

  • The templar Gor Na Drak (usually near the old mine, walking around with another templar) teaches you how to get secretion from minecrawlers for free when you talk to him.

  • The spell scroll "Transform into bloodfly" is very useful:

    • A bloodfly is very fast.
    • Can also fly over water.
    • The scroll costs 100 ore. It's the same price as a potion of speed, but it has no duration limit (it lasts until you transform back).
    • You have no fall damage.
    • You can climb some steep mountains this way.
    • Some monsters won't attack you, but some NPCs will attack you.
    • Your attribute stats will temporarily change.
    • It requires 10 mana to cast (low requirement).

  • Almost all mummies that are lootable in the game (Orc temple and The Sleeper temple) have really good loot: permanent and regular potions and amulets and rings.

  • Skill investments:

    • For melee skills:
      • Strength
      • One-handed weapons have a bit lower weapon damage but are less clunky and faster. You can also interrupt enemy attacks.
      • Two-handed weapons have the highest damage, but are slower.
      • Get at least the first tier of one-handed training. It will change the combat animations and make combat less slow and clunky.
    • For ranged skills:
      • Dexterity
      • Cross-bows have high damage and are very good.
        • Cross-bow: the path for cross-bow training is easier in the old camp. When you become the Old Camp guard Scorpio can train you. Later in the game in chapter 4 after some story progression he will train everyone.
    • For mage characters:
      • Investing a little bit into strength, lets say 30 STR is OK.
      • Magic skills are powerful but are a bit clunky and slow.
      • Joining the Old Camp (fire mage) or New Camp (water mage) for the magician path is probably easier.
    • Harvest animals:
      • Early investments of a few skill points into getting skins, teeth and claws from animals is OK (it is easy to get a lot of ore if you loot everything though).
    • Lockpicking: training in lockpicking only reduces the chance of breaking locks when you fail the combination. Investing in it is OK but not necessary. A small cheat: the lockpick combination stays the same, so you can save and reload the game to avoid losing lockpicks.
    • Bad skill investments to avoid:
      • Sneak and pickpocket are nearly useless.

Overall recommendation: I'd recommend a hybrid of melee/magic or melee/ranged. Early game for melee: get strength to 100 and at least the first tier of one-handed training.
Later in the game focus more on ranged combat or learning the magic circles.

Side-quest Chromanin / The Stranger

This describes an interesting side quest in the Gothic 1 game, which is not too obvious to find and may be overlooked.

The first Chromanin book is found by defeating the skeleton mage in the Fog Tower. On its bones you can find the Chromanin book. Reading the book starts the Chromanin / The Stranger quest. The books contain some typos; being demonically possessed could be an excuse for that :)

Note that the Old Books only spawn after reading the previously found book, so they have to be collected in this specific order.

Fog tower mage
Location
Map

Text:

"He who is willing to
renounce all depravity
and wanders on the path
of righteousness, shall
know where the source
of my power lies
hidden. So that he might
use it to break the chains
of this world and prove
worthy to receive Chromanin."

"The Wise One sees to
having a general overview before he
dedicates himself to his
next mission."

Chromanin

The clue is in the words "general overview" on the second page. One of the highest points on the map is the tower where you find and free the orc Ur-Shak from being attacked by other orcs.

Clue: "The Wise One sees to having a general overview before he dedicates himself to his next mission".
Location: on top of the tower near where the orc Ur-Shak was.
Item: Old Book.

Item
Location
Map

Chromanin 2

Text:

"Carried from the tides
of time, Chromanin's
visions have opened my
eyes. No price could be
high enough to ever
renounce my faith in
them, for it touched my
heart too insensely."

"What is devided will be
reunited, after being
massively separated for
a short time."

Clue: "What is devided (sic) will be reunited, after being massively separated for a short time". Location: small island near the (divided) river near the Old Camp.

Item
Location
Map

Chromanin 3

Text:

"Oh, Ancient Gods. How
can it be that a man like
me, simple and unworthy,
may receive such great a
legacy. I feel great
fear to lose all of it
again by a slight
faltering in word or
deed."

"The wise fisherman
occasionally tries to get
lucky on the other side
of the lake."

Clue: a fisherman's lake with a (partially sunken) hut can be found close to the entrance of the New Camp. On the other side of the lake is the Old Book.

Item
Location
Map

Chromanin 4

Text:

"I dare not to be in
the presence of
Chromanin one day. Gone
are the days of wasting
and wailing. So easy it
will be to acheive
absolute perfection. I'm
not far from it!"

"Long forgotten are the
deeds of those who once
were aboard."

Clue: "Long forgotten are the deeds of those who once were aboard." A broken ship can be found near the beach at the entrance of the Fog Tower.

Item
Location
Map

Chromanin 5

Text:

"But I shall not walk this
path alone. This honor is
mine. I must accept to
share the power within
myself with the worthy
ones who are to come and
find me. I hope they're
coming soon..."

"You will find me where it all began."

Clue: "You will find me where it all began." Very obvious it is the same location as were the first book was found.

Item
Location
Map

Chromanin 6

Text:

"Empty pages"

Item

On the corpse is the last Chromanin book. This last book is empty. When you read it, there is an evil laugh and 2 skeleton mages with skeleton minions spawn.

Chromanin quest log

Here are the texts in the quest log:

Quest log part 1
Quest log part 2

The End

When you use the tips described above, Gothic should be an easier game and you should be able to reach a high(er) level with lots of mana/strength/hp.

Have fun!

]]>