2026-02-10 -- "The Luminous Dead" (Caitlin Starling)
====================================================
(Some spoilers ahead.)
Oof!
This was advertised as SciFi horror, which isn't wrong: The main charac-
ter explores a deep cave on an alien planet, but for various reasons --
one of the main ones being that she must not give off any scent and must
remain mostly quiet, i.e. her voice must not be audible -- she's in a
suit with lots of sensors and gadgets and a *feeding tube*. Without the
connection to the main computer, she'd be completely lost.
But then there's also the woman on the other end of the comm line, who's
supposed to tell her where to go and what to do. Buuuuut is she
trustworthy? ;-) And why is there only *one* person in that control
center instead of a whole team?
What I didn't expect was that it would turn into a romance. I usually
don't like that at all, but in this book, it worked really well for me.
They're (almost) never in the same room, they only talk over the comm
line and a video feed, which makes this thing way more interesting. The
weird power imbalance (the suit can be controlled remotely) is eventu-
ally overcome to some degree and actual trust is established -- I think.
It's certainly not the most healthy relationship, shall we say. But it
was great to read.
The book comes with a map, and that is sorely needed. I mean, it's
all caves, so it would be easy to get lost without one.
Towards the end, there's a lot of paranoia and panic, running low on re-
sources. There are quite a few moments of "oh my god, no, please don't".
Then again, there aren't that many "scary entities" in the caves (or are
there?), so I'd rather classify this as "terror" instead of "horror", if
that makes sense.
It was one of the best stories I read these past few months, if not the
best. I'll certainly have a look at Caitlin Starling's other books.
This is a nice interview (only watch this after reading the book):
https://www.youtube.com/watch?v=lui6zspzHts
2026-01-11 -- Thinking about software licenses again
====================================================
I have used very permissive licenses in the past. I just wanted to make
it easy for fellow "hackers" to use my code. If you see an MIT-licensed
project, you know that you don't have to worry about anything.
I *had* considered "Bad Actors". But back then, a Bad Actor was some
company that takes my code and incorporates it into a proprietary prod-
uct. Like, someone takes my window manager, modifies it, and uses it in
an embedded device. Or someone uses my Gopher server and ... I don't
know, what would you do with that in a commercial product? Okay, anyway,
suppose that had happened. Then what? I would first have to know about
this violation and then I would have to sue that company. This whole
scenario was extremely unlikely.
But the situation changed last year.
Now we have "AI" crawlers and I *know* that they are scraping my website
like crazy. And now it is *evident* that they do not follow any license
requirements (assuming I had used the GPL). They break the rules and
everybody knows it (and most people don't care at all, because look how
shiny it is).
So here's what's bugging me: Since I use permissive licenses, I have no
right to complain about this. I still won't sue anybody, but even from a
"moral" standpoint, it's just my own fault. I allowed them to do this. I
enabled "AI" companies.
I was living under the assumption that I just host a small website that
nobody really cares about. Maybe a few people, maybe sometimes parts of
my code are useful to others. A small community.
That's not true anymore. Huge companies make money by using my stuff.
Granted, just a tiny fraction, but still.
So, what can I do about it?
- Take the website offline and move everything to Gopher or Gemini.
- Switch my projects to GPLv3.
I think that moving everything to Gopher is probably just as naive
as my original approach. So maybe switch to GPLv3? For the vast majority
of my projects, I'm the sole copyright holder, so I can just do that.
My gripe with the GPL is that it's so hard to understand. I literally
can't read the original document and understand what it's about. It's a
long legal document. No way. The only thing I can do is read third-party
interpretations and then trust them. This isn't great.
Sigh. This needs some more thinking.
2025-11-19 -- Maybe more Java?
==============================
For quite a while now, my "toolchest" has looked like this:
- C or Assembler for very low-level tasks.
- C with GTK for GUI programs.
- Python for "normal" programs or larger scripts.
- Shell scripts for gluing other tools together.
Rust is very, very slowly creeping in as well, I sometimes use it for
"systems programming" or other "low-level-ish" tasks. But I'll be hon-
est, Rust is so hard to learn, I always shy away from it and don't use
it often enough.
Regarding GTK: I loved it during the GTK2 days but now I have to admit
that I'm not that much of a fan anymore. It has become a pretty heavy
toolkit by now. I've pretty much stopped using it in my own code and
this now leaves a gap.
I've dabbled a bit with PyQt6. And I've noticed that I'm slowly getting
tired of Python's dynamic typing. I find it more comfortable to have a
compiler that reliably tells me when types are wrong (or when there are
typos), because frankly, I don't think I've ever really made use of dy-
namic typing *at runtime* (except for parsing JSON files into dictionar-
ies, I guess). My brain doesn't work that way. I want static typing. I
want to get all kinds of stuff out of the way before the program even
runs. That's just not the case with Python: you always have to test each
and every code path just to catch typos (or more serious errors). Some
people might say that this is good practice anyway, but I'm not con-
vinced -- how many projects really do have 100% test coverage? (I didn't
find tools like mypy or Python's type hints to be helpful.)
Long story short, I used IBM Java 1.0.1 on OS/2 Warp 4 for Advent of
Code in 2024 and that was a surprisingly nice experience. That's an an-
cient version, of course. I wasn't new to Java, but I stopped using it
while Java 6 was still the most recent version.
Java is bad in these regards:
- Does not produce native binaries. You always need a JVM.
- Memory management is annoying. Having to decide up-front how much
RAM my program is going to use is a bit silly.
- The code can get rather "wide" because many class names are very
long.
But on the other hand, these properties are great:
- Compiled language, static typing.
- Memory safety is not an issue.
- Produces small "binaries".
- The official documentation is really good and a pleasure to work
with.
- It can do proper multithreading (unlike Python). I've already used
multithreading extensively during my Java 6 days and it was nice to
work with.
- OpenJDK is Free Software now.
- It appears to be very stable with few surprises and shenanigans, old
stuff still works, probably because it's used so much (?) in enter-
prise software.
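The multithreading point is easy to sketch: with java.util.concurrent,
worker threads really do run in parallel on multiple cores (there's no
GIL). A minimal example summing a range on a thread pool -- the class
and method names here are mine, just for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    // Split the range 1..n into chunks and sum each chunk on its own
    // pool thread; Future.get() collects the partial results.
    static long sum(long n, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            long chunk = n / threads;
            List<Future<Long>> parts = new ArrayList<>();
            for (int t = 0; t < threads; t++) {
                long lo = t * chunk + 1;
                long hi = (t == threads - 1) ? n : (t + 1) * chunk;
                parts.add(pool.submit(() -> {
                    long s = 0;
                    for (long i = lo; i <= hi; i++) s += i;
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> f : parts) total += f.get();
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // n*(n+1)/2 for n = 1,000,000 is 500000500000
        System.out.println(sum(1_000_000, 4));
    }
}
```

The static typing pays off here too: the compiler checks the whole
Future-of-Long plumbing before anything runs.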
And JVM ramp-up times are not an issue anymore. This used to be annoy-
ing, but that's a thing of the past. A "java Hello" takes 33 ms on my
box.
I'm currently toying around with making a little file manager in Java
that uses Swing for a GUI. Swing is *not that bad*.
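The non-GUI core of such a tool -- reading a directory into a sorted
list of names that a Swing JList could then display -- might look
roughly like this sketch (class and method names are mine, not from
any actual project):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class DirModel {
    // The "model" half of a file manager pane: list a directory's
    // entry names, sorted. A Swing JList (via DefaultListModel)
    // would be the "view" on top of this.
    static List<String> entries(Path dir) throws IOException {
        try (Stream<Path> s = Files.list(dir)) {
            return s.map(p -> p.getFileName().toString())
                    .sorted()
                    .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        // Demo on a throwaway temp directory.
        Path tmp = Files.createTempDirectory("dirmodel-demo");
        Files.createFile(tmp.resolve("b.txt"));
        Files.createFile(tmp.resolve("a.txt"));
        System.out.println(entries(tmp));
    }
}
```

Keeping the directory logic separate from Swing also makes it testable
without opening a window.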
I'll also try to catch up with new features that landed in Java.
And then we'll see. Maybe I'll get fed up quickly. Maybe not.
2025-11-05 -- My (current) reasons against AI
=============================================
I'm talking about typical cloud-based tools here. Locally run models
that you train with your own data are a different story (in some re-
gards). This list is not exhaustive.
From a user's perspective
-------------------------
Everything that AI produces appears to be correct and legitimate. AI is
confident. It always *looks* right.
But it just isn't. It doesn't reason, it doesn't understand anything.
It's just a string of words or code that is likely to appear. And yet it
looks so convincing.
None of it is trustworthy.
This is dramatically different from internet search engines: They also
link to incorrect content, all the time. But they don't try to make it
look real. And they, by design, always link to the original source. This
means that I, the reader, can build a skill over time: I can learn to
estimate the trustworthiness of a search result. When I see search re-
sults from a random blog, I know that it's just a random blog and it
could likely be wrong. When it's a search result from official docu-
mentation or a government page or whatever, then the trustworthiness
increases.
(You could make the same analogy with libraries and physical books.)
AI automatically introduces a certain framing. It might tell you some-
thing like: "I found a solution to your problem, you can do foo and bar,
because that does baz. Here are my sources: ..." It sounds and feels hu-
man, and that essentially tricks us into believing it. It doesn't just
give you the sources, it gives you an "interpretation" of them, except
that this interpretation is likely to be inaccurate. You have to learn
to ignore this interpretation. I found this to be quite tricky, because
it looks so good and convincing. By the time you reach the "here are
my sources" part (if there even are any), you have already ingested pos-
sibly inaccurate information. To fight this, you should treat the output
of AI like a random, bad blog, basically. And then you should only trust
the actual sources (based on their individual trustworthiness) and ig-
nore everything else. And that just leaves you with ... a fancy search
engine, but with lots of overhead.
In short, AI has the effect of gaslighting you.
I found it very hard to work with any of these AI tools, because I con-
stantly have to second-guess something that looks legitimate on first
sight. This is much more exhausting than just doing the research on my
own (using search engines) or writing the code myself. It's like pair
programming with a highly confident and charming intern with no work ex-
perience whatsoever -- you have to second-guess every move this person
makes. This slows me down.
AI isn't trustworthy, it doesn't understand anything (and never will, if
we keep using the current techniques). Not only is it not useful to me,
it is actively harmful.
From a moral/political perspective
----------------------------------
I'm already seeing the effects of AI with the (few) people that I teach:
When confronted with a task, they immediately reach for an AI bot and
then only work with that answer. They no longer do research on their own
-- but that is an important skill. You must be able to read different
sources, weigh them against each other, know which are definitive,
correct sources and which are not. You must have your own thought
process,
your own understanding and reasoning. When you blindly trust the output
of an AI bot, you are worse off than before. You're back to "but it said
so", without having an actual understanding of your own.
I already had to clean up the aftermath of "vibe coding": people not
knowing what the hell they're doing, but being proud of it, because the
shiny new AI did it. Interns will only learn how to use these AI tools,
they will not learn the actual skills.
AI is designed and intended to make you work less. This sounds good, but
in the case of programming or writing, it is bad. These are skills that
you have to learn yourself. You must be the programmer or the writer.
Here's an analogy that I once read: "You don't bring a forklift to a
gym." That nails it.
Another analogy: You can't claim to speak a language if you only ever
use translation tools. No sane person will hire you as a translator,
either, just because you put "I know how to use Google Translate" in
your
resume.
If you don't learn these skills yourself, you will depend on the compa-
nies that sell you their AI tools, and that is my big concern. Do we re-
ally want that? Is that empowerment of the user? Is that better than de-
pending on search engines like we already do? Does this make the world a
better place?
*At the moment*, we still have a large percentage of people who are
(mostly) able to use AI "responsibly": They can use it to get new ideas,
learn about new approaches, without becoming a victim of their gaslight-
ing, because they can filter out the garbage (assuming they're not too
lazy to actually do that). They can only do that, however, because of
their pre-existing knowledge -- because they already are programmers or
writers. When you take this knowledge away, then it'll be a very differ-
ent story. And I am already beginning to see these effects in younger
people.
I want future generations to be good programmers. I want them to be able
to write their own documentation in their own words (and thus incorpo-
rate their own knowledge about their own software). I want people to be
independent from companies. I want people to be able to program when the
internet is down. I want to see strong, independent, intelligent human
beings.
From a webmaster's perspective
------------------------------
I host a website. It runs on a VPS at a cloud hoster. That website con-
tains my little blog and my software.
This server keeps getting overrun by AI bots. They viciously scrape
everything they can get their hands on. They come in huge waves, I've
seen about 1000 requests per second, which isn't even that much;
others get way more requests. And they are not smart: They often
re-scrape the
same HTML page over and over, not even using If-Modified-Since. They ma-
liciously try to hide their identity.
All this puts a lot of strain on targets like me. You've probably seen
"this anime girl"[1] pop up on a lot of websites: That's Anubis and it
tries to counteract the effects of these bot attacks.
Even the Git repo viewer of the Linux kernel[2] uses Anubis.
This isn't good, this isn't sustainable. As a workaround, I had to block
several other cloud providers from accessing my website. Yes, not just
some individual servers, but entire cloud hosters -- which means that
you, a user of my website, can no longer do things like run your own RSS
reader and fetch my feed.
This is harmful to the internet at large. If this keeps going on, people
will be discouraged from doing self-hosting even more. What spam is to
email, AI bots will be to webhosting. This completely goes against the
ideas of a free and decentralized internet that I believe in.
From a legal perspective
------------------------
There's a double standard at work here: For decades, we've been told
that copyright is important, teenagers have been sued for downloads, we
pay extra fees for storage media like hard disks or USB sticks (at least
we do in Germany, "Urheberrechtsabgabe"). But now, all of a sudden, this
doesn't matter anymore *for the AI companies*? Suddenly they can use
the content of my website without asking or compensation, completely
disregarding any licenses? Huh?
Make them play by the same rules -- or open up everything to everyone.
But don't bullshit me just because this is new shiny tech.
I am absolutely aware that, in many areas, the law is (effectively)
different for rich people. But this double standard is an integral part
current AI tools. When you praise these tools, you either also praise
this double standard, or you must advocate for a change in law that
makes everything way more open.
And even if AI was perfect ...
------------------------------
There's another factor at play and it took me a while to realize this.
Let's assume that AI was perfect and none of the complaints above were
true.
I still would not want to use it.
The main motivation behind almost everything I do is that I want to ex-
plore and to learn. A (hypothetical, perfect) AI takes that away from
me.
I already dislike that I know so little about hardware and electronics.
I dislike that I do not know how to fix my car. I dislike that so much
of what we do is "in the cloud". All this makes me depend on other
people
(or rather, corporations), which is bad.
"But you can't know everything!"
Yes, but I would like to. I *want* to dig through the problems, I *want*
to write the code. I want to understand what's going on, not be a slave
to some machine that spits out an answer.
____________________
[1]: https://anubis.techaro.lol/
[2]: https://git.kernel.org
2025-11-02 -- File management and workflows in terminals or GUIs
================================================================
My desktop stopped having icons about 15 years ago. And I stopped using
GUI file managers around the same time. It's all terminal-based since
then.
It's well-known by now that I have a certain affinity for the Windows
Explorer of the 95 to 2000 era. Every time I start a Win2k VM, I can't
help but think "wow, this was nice". Or maybe I'm just growing a bit
tired of my current workflow and setup.
PCManFM is surprisingly similar to Windows Explorer and you can even
make the Qt version look somewhat similar. I've been playing with this
program for the last couple of weeks.
It took me a while to realize that it's not really about file manage-
ment. It's about the entire workflow. Just using PCManFM every now and
then isn't meaningful. I've built so many small things that only make
sense in a terminal. I'd have to change everything from terminal-based
to GUI-based again. I'd need to have a desktop with icons again, I'd
need to have a start menu again. I'd need a Git GUI integration. And so
on, and so on.
Not only is that a lot of work, it's also less powerful.
I think a better way forward is to keep perfecting my terminal work-
flow. There are rough edges that could use some improvement.
As a first step, I polished up some scripts -- some of which I've been
using unchanged for over a decade! -- that help with file management.
For example, I have a "clipboard" system for the terminal, i.e. I can
mark some files for copy/cut and then do the paste operation in some
other terminal. And I have "vmv", an interactive version of "mv" that
makes use of Vim. Stuff like that.
What really isn't great, though, is my very frequent use of "cd
$project". I'd like to have something like OS/2's "work folders" -- or
whatever they're called in English, I don't know, sorry. The feature
goes like this: You can mark a folder as a "work folder" and that makes
it a "container" for a "session" of programs. When you open that folder,
all the programs open with it. Close it and all the programs close. That
means you could make, say, "~/work/C/my-http-server" a work folder and
every time you open it, a Vim with the source code opened appears, the
documentation for some library opens (because you need to look up stuff
often), a terminal opens where you can run tests, and so on.
Just *opening* these things would be pretty easy, just make a shell
script. *Closing* them all in one go, *remembering their settings* and
so on, that would be the hard part. I doubt that I'm going to implement
all that. But maybe parts of it. Somehow.