In 3 days Pocket is shutting down, so it's high time to worry about a replacement, if you haven't already. I know a lot of folks took this as an occasion to try more sophisticated knowledge management apps (hey @obsidian@mas.to 👋).

Personally, I was looking to keep the experience of a simple reading stack. And boy, @readeck@mastodon.online is exactly what I was looking for, and better than I was hoping for:

  • Open source, written in Go, simple frontend stack.
  • Self-hosted, tiny resource footprint, very snappy (a minimal setup sketch follows this list).
  • Does exactly what Pocket used to do, without any of the "discover random shit" nonsense.
  • Browser extension and neat workarounds for mobile OSes that make saving pages for later quick and easy. This is where a lot of alternatives fell short for me: saving links was just too fiddly.
  • Works equally well on desktop and mobile, supports PWA.
  • A pretty well-functioning reader mode! And if I find something that doesn't work, I could actually go and patch it, if I care enough. Pocket tends to swallow or mangle code blocks more often than not, which was a big pain.
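
Speaking of self-hosting: assuming the container image and data path I remember from Readeck's docs (double-check both before copying), a single-container setup is about as small as it gets:

```
# Hypothetical single-container setup. The image name, port, and the
# /readeck data path are my assumptions from Readeck's docs; verify first.
docker run -d --name readeck \
  -p 8000:8000 \
  -v "$(pwd)/readeck-data:/readeck" \
  codeberg.org/readeck/readeck:latest
```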

Last but not least, it can import your data from Pocket, so migration is pretty smooth. It did take a few hours to chew through the ~2500 articles I've apparently saved over the years, since it had to fetch the links and re-extract the content. This is actually one gripe I have with the Pocket export: it just gives you a CSV with links and light metadata, but it doesn't export the saved article content. If you have a 10-year-old link in there somewhere, ~pray~ donate to the web archive gods that it has been saved there. If you care.
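
If you want to post-process that export yourself (say, to feed the links to the Wayback Machine before they rot), the file is trivial to walk. Here's a minimal Go sketch; the file name and the column set (title, url, time_added, tags, status) are my assumptions about the export format, so check the header row of your own archive:

```go
// Minimal sketch of walking Pocket's export CSV. The file name and the
// column names (title, url, time_added, tags, status) are assumptions
// about the export format; check the header row of your own archive.
package main

import (
	"encoding/csv"
	"fmt"
	"log"
	"os"
	"strconv"
	"time"
)

func main() {
	f, err := os.Open("part_000000.csv")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	r := csv.NewReader(f)
	header, err := r.Read() // first row is the header
	if err != nil {
		log.Fatal(err)
	}
	col := map[string]int{}
	for i, name := range header {
		col[name] = i
	}

	rows, err := r.ReadAll()
	if err != nil {
		log.Fatal(err)
	}
	for _, row := range rows {
		added, _ := strconv.ParseInt(row[col["time_added"]], 10, 64)
		fmt.Printf("%s saved on %s\n", row[col["url"]], time.Unix(added, 0).Format("2006-01-02"))
	}
}
```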

One thing that I wish worked differently is that it splits its state between the database (SQLite or Postgres) and disk. I kind of wish everything went into the database, so that it could be backed up together with the rest of the Postgres instance, but beggars can't be choosers. I'll take it.

Repeat after me: every life is precious, every death is a tragedy.

Had a discussion with some colleagues about the potential of using AI for incident auto-mitigation.

It struck me that a lot of the concerns boil down to the fact that we are not used to the idea that computers can also exhibit a failure mode we know as "human error". We are used to computers failing "as programmed". Framed that way, though, we've already invented a lot of guardrails to prevent humans from making dumb mistakes, and many of them can translate into the AI context.
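
To make that concrete, here's a toy sketch of one such translated guardrail (every name in it is hypothetical; this illustrates the idea, not a real system): treat a model-proposed mitigation like an action from a brand-new on-caller, auto-approving only reversible actions and requiring a human ack for the rest.

```go
// Toy illustration of gating AI-proposed mitigations the same way we gate
// human operators. All types and names here are hypothetical.
package main

import (
	"errors"
	"fmt"
)

// Mitigation is an action proposed by the auto-mitigation model.
type Mitigation struct {
	Action string // e.g. "restart", "rollback", "drain"
	Target string
}

// Reversible actions the model may execute on its own; anything else
// requires a human in the loop, just like it would for a new on-caller.
var autoApproved = map[string]bool{
	"restart": true,
	"drain":   true,
}

func gate(m Mitigation, humanAck func(Mitigation) bool) error {
	if autoApproved[m.Action] {
		return nil // low-risk and reversible: proceed
	}
	if humanAck(m) {
		return nil // a human confirmed, same as a peer review
	}
	return errors.New("mitigation rejected: needs human approval")
}

func main() {
	ask := func(m Mitigation) bool {
		fmt.Printf("approve %s on %s? (pretend the human said no)\n", m.Action, m.Target)
		return false
	}
	err := gate(Mitigation{Action: "rollback", Target: "prod-db"}, ask)
	fmt.Println(err)
}
```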

I don't know where I'm going with it. Just a thought.

The thing about vibe coding is that it quickly gets you through the first 80% of the project, and slams you face-first into the second 80%.

Would you look at that! Another blog post about a command shell! It's almost as if being on vacation provides me with time to do something enjoyable 🙄

https://nevkontakte.com/2025/elvish.html

What’s the best Linux distro with KDE these days?

Which is exactly why I’m not gonna do it. Stop asking.

Me, every freaking time:

  • 😬 this mobile app surpassed my enshittification tolerance, is there something better?
  • 💡 oh, can I replace it with my own app?
  • 🕵️‍♀️ probably not, but let me google a little bit
  • 🥳 wow, there is a web API for that, that's almost too good to be true!
  • 😞 ah, of course. iOS doesn't support it.

So, I spent the better part of today writing a noise generator that will be used in pat.junkie.dev to produce some more interesting events and behaviors. But that's not the interesting part.

The interesting part is that I tried to use Gemini Code Assist in VSCode as much as possible. I did not go as far as full vibe coding mode, but I tried to save myself as much typing as I could. For context, I was writing in Go. Here are my impressions so far, in no particular order:

  • good: it's great for "boring" stuff and boilerplate. It quickly bootstrapped me some unit tests, benchmarks and even fuzz tests. I definitely wouldn't have bothered with the latter if I had to look it up myself.
  • good: it's great for refactorings that are easy to describe, but hard to do with standard tools. For example, rearranging a few dozen printf format strings and the corresponding arguments is very annoying to do by hand, but Gemini did it in no time.
  • bad: sometimes it sneaks in changes unrelated to what you've asked. Half the time they are fairly sensible (like removing a debug-print I totally forgot about), but I didn't ask for that and sometimes it's hard to convince it to only do what you need.
  • okay?: it's not a fast thinker. I am definitely tempted to tab away to a different window while it's generating a response. But it's still faster than without it, so I can live with that.
  • bad: sometimes it's being silly and creates a single-use variable that really doesn't need to be there. I guess it optimizes for self-explanatory code, but past a certain point it actually harms readability more than it helps.
  • good: it's great for throwaway things I don't care about. For example, I needed a Python script to visualize the generated noise, and it generated one easily. It was ugly as fuck, but I was going to delete it anyway, so as long as it works...
  • bad: explaining anything complicated or non-standard is so tedious that it isn't worth it. In my case, I wanted a variation of 1-D Perlin noise, but for arbitrary points, with time.Time as the dimension (a simplified sketch of the idea follows this list). It really was easier to code it myself than to watch out for all the type conversion bugs and bit-mashing mistakes it made.
  • meh: it's not great for things you have strong opinions about and want done in a particular way. Want a specific API? Gotta write out that interface, or you get something that doesn't match your broader intentions. Not very different from an intern, I guess :D
  • good: Gemini's large context window allows it to just slurp up my whole project and analyze it as a whole. It made surprisingly good responses for silly questions like "tell me what this project is and how it works".
  • bad: inline suggestions are nice, but they get messed up if editor.autoClosingBrackets is on.
  • good: With rare exceptions, the code it generates is correct. When it's not, it's the kind of mistake I would have made myself. For example, who could have thought that math.MaxUint64 is actually an int (under very specific conditions)?
  • lolwhat: Sometimes it breaks style conventions really badly. Like, a variable val1_at_t1 in Go, really? But it's very rare. This is the only noteworthy example I caught today.
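
Since I mentioned the noise generator, here's roughly the shape of the thing I was describing. To be clear: this is a simplified value-noise sketch (hash the lattice points, smooth between them) rather than proper gradient Perlin noise, and not my actual code.

```go
// Rough sketch of 1-D noise with time as the axis: value noise over a
// lattice of time cells, not true gradient/Perlin noise. The hash constant
// is borrowed from a common 64-bit mix function; any decent mixer works.
package main

import (
	"fmt"
	"math"
	"time"
)

// hash maps a lattice index to a pseudo-random value in [0, 1).
func hash(n uint64) float64 {
	n ^= n >> 33
	n *= 0xff51afd7ed558ccd
	n ^= n >> 33
	return float64(n%(1<<53)) / (1 << 53)
}

// smoothstep eases the blend so the curve has no kinks at lattice points.
func smoothstep(t float64) float64 { return t * t * (3 - 2*t) }

// Noise returns a smoothly varying value in [0, 1) for any instant, with
// one random lattice point per period of wall time.
func Noise(at time.Time, period time.Duration) float64 {
	t := (float64(at.Unix()) + float64(at.Nanosecond())/1e9) / period.Seconds()
	cell := math.Floor(t)
	a := hash(uint64(int64(cell)))     // value at the left lattice point
	b := hash(uint64(int64(cell)) + 1) // value at the right lattice point
	return a + (b-a)*smoothstep(t - cell)
}

func main() {
	now := time.Now()
	for i := 0; i < 5; i++ {
		at := now.Add(time.Duration(i) * 15 * time.Second)
		fmt.Printf("%s -> %.3f\n", at.Format(time.RFC3339), Noise(at, time.Minute))
	}
}
```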

Overall, it is quite handy and allowed me to focus a lot more on the interesting bits of the logic. I also clearly have room to improve my prompting, and I couldn't be bothered to try and feed it a design doc to see what happens. But it definitely made the time more pleasant.

I got up to some fishy business yesterday, and wrote a blog post about it 🐟

More seriously, I wanted to figure out how transient prompt works in projects like Powerlevel10k, and once I did I decided to write it down.

https://nevkontakte.com/2025/transient-fish.html

It's a bit sad that I rarely have enough energy for random exploration outside of a vacation… But at least it's good to know that my brain is not a boring tin can yet.

Well, it was great while it lasted. Any recommendations for alternatives? The main requirement is iOS sharing integration for saving links.

One thing to remember about machine learning (and, by extension, AI) is that it is, at the end of the day, a technique for complex function approximation. No more, no less. Think back to the Stone–Weierstrass theorem from your mathematical analysis course, just on a different scale.
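
For reference, the Weierstrass flavor of that theorem (Stone's version generalizes it well beyond polynomials) states, roughly:

```latex
% Polynomials are dense in C([a,b]) under the sup norm:
\forall f \in C([a,b])\ \forall \varepsilon > 0\ \exists \text{ a polynomial } p :
\quad \sup_{x \in [a,b]} \lvert f(x) - p(x) \rvert < \varepsilon .
```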

It is hard to imagine writing down an analytical definition of the "human speech" function, but, amazingly, we can computationally arrive at something that behaves very similarly, and we call our latest take on it "Large Language Models". The impressive thing about this is how unimpressive it really is for what it does.

Looking through that lens, it feels kind of silly to ascribe real intelligence to such models, since they are merely an imitation of the original phenomenon. But it does provoke some reflection on what the existence of such an approximation tells us about the original.

I think it also indicates the limitations of the current generation of AI techniques: they can achieve great (perhaps arbitrarily great) accuracy when interpolating, that is, when we are working within the information space well-represented in the training dataset.

However, it's much harder to make assertions about extrapolation accuracy for ideas and knowledge the model hasn't seen before, never mind ideas completely novel to humanity. To me this is a hint as to why AI is actually pretty bad at creativity: not because it can't extrapolate, but because its extrapolation is rather unlikely to match what humans consider creative.
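
A toy way to see that asymmetry (my own illustration, nothing to do with actual model internals): "train" a piecewise-linear model on samples of sin(x) over [0, π], then compare its error inside and outside that interval.

```go
// Toy illustration of interpolation vs. extrapolation; my own example,
// not a claim about how LLMs work internally. We "train" on samples of
// sin(x) over [0, π] and approximate with a piecewise-linear model.
package main

import (
	"fmt"
	"math"
)

const (
	lo, hi = 0.0, math.Pi // the "training" interval
	n      = 9            // number of training samples
)

var xs, ys [n]float64

func init() {
	for i := range xs {
		xs[i] = lo + (hi-lo)*float64(i)/float64(n-1)
		ys[i] = math.Sin(xs[i])
	}
}

// predict linearly interpolates between the two nearest samples; outside
// the training interval it extends the edge segment's slope (extrapolation).
func predict(x float64) float64 {
	i := 0
	for i < n-2 && x > xs[i+1] {
		i++
	}
	slope := (ys[i+1] - ys[i]) / (xs[i+1] - xs[i])
	return ys[i] + slope*(x-xs[i])
}

func main() {
	for _, x := range []float64{0.5, 1.5, 2.5, 4.0, 5.0, 6.0} {
		err := math.Abs(predict(x) - math.Sin(x))
		zone := "interpolation"
		if x > hi {
			zone = "extrapolation"
		}
		fmt.Printf("x=%.1f  |error|=%.3f  (%s)\n", x, err, zone)
	}
}
```

Inside the interval the error stays tiny; one step past it, the model confidently walks off in a straight line while the truth turns around.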

Does this make AI useless for art, novel research, or other forms of innovation? Not at all, I don't think. For one, all innovation consists of 1% actually new ideas and 99% hard and boring implementation/testing/experimental work, and any help with those 99% could still be a massive help. And even within that 1%, the random flailing of AI models can inspire humans into actually useful ideas :)

All of that is to say: AI is just a better brush, and it's silly to pretend it doesn't exist.