Squirrel Burgers There’s a stomach-churning parable, commonly attributed to Ken Schwaber, that circulates among people who think a lot about agile as a software development orthodoxy (I pulled this particular telling from this post on Scrum.org).
“A man walks into Fat Burger and orders a Double Fatburger, fries, and a drink.
Man only has $3.15 but the total comes to $7.15. The manager tells him he’s going to remove something from his order.
I’ve been making some improvements and changes to the homelab, and I would now consider this to be its third iteration. In the Dickensian tradition, I’ll go through the past, current, and future states of my homelab.
Ghosts of Homelabs Past The first iteration was an ever-growing pile of NUCs running Debian. Initially, the idea was to run Kubernetes orchestrated by Pulumi, but the overall complexity of the system was unmanageable, and I also made some odd architectural decisions, such as backing persistent volume claims with SMB shares (yikes).
A look back at feature releases We’ve come a long way since I last wrote about Precis (my open-source AI-enabled RSS Reader).
A couple of highlights from the new features we’ve introduced:
- v0.3.3: a hybrid LMDB storage handler that stores content in the filesystem to help keep the database size manageable (a rough sketch of the idea follows below), plus a nix flake for developers.
- v0.3.0: the ability to fully delete feeds, including their feed entries and feed content, from the database.
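This isn’t pulled from the Precis codebase - just a minimal sketch of the hybrid-storage idea, assuming the py-lmdb package and a made-up on-disk layout: small metadata stays in LMDB, while bulky article content is written to plain files and only the path is stored in the database.

```python
import hashlib
from pathlib import Path

import lmdb

# Hypothetical locations - not the paths Precis actually uses.
CONTENT_DIR = Path("./content")
CONTENT_DIR.mkdir(exist_ok=True)
env = lmdb.open("./metadata.lmdb", map_size=2**30)  # 1 GiB map size


def put_entry(entry_id: str, metadata: bytes, content: str) -> None:
    # Large content goes to the filesystem...
    digest = hashlib.sha256(content.encode()).hexdigest()
    content_path = CONTENT_DIR / f"{digest}.txt"
    content_path.write_text(content)
    # ...while LMDB keeps only the small metadata plus a pointer to the file.
    with env.begin(write=True) as txn:
        txn.put(f"meta:{entry_id}".encode(), metadata)
        txn.put(f"content_path:{entry_id}".encode(), str(content_path).encode())


def get_content(entry_id: str) -> str:
    with env.begin() as txn:
        path = txn.get(f"content_path:{entry_id}".encode())
    return Path(path.decode()).read_text()
```

The point is that the LMDB file stays small no matter how much content accumulates, which is what keeps the database size manageable.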
It started with virtualization, but I regret to announce that I am now running NixOS on bare metal.
Over the past week, I installed NixOS on one of my laptops, a Lenovo ThinkPad T14 Gen 2. It’s relatively modern, so there was never a doubt that it could run NixOS. However, I decided to install it on the laptop (which was previously running PopOS) because I wanted to try a tiling window manager, and in particular Hyprland - running on the Wayland display server.
Over the past week, I’ve been playing around with virtualization. Along the way, I’ve learned a lot about virtualization, and a little about myself, too.
Let’s start with the setup. I installed Proxmox VE on what’s theoretically my gaming PC - a Lenovo ThinkStation P520 with a 6-core, 12-thread Intel Xeon W-2135 workstation CPU and 64 GB of DDR4 ECC RAM. Typical of enterprise workstation hardware, it’s a bit more powerful and memory-dense than your average PC, but less so than a rack-mounted generalist server.
This post is about an idea that’s been rattling around in my head lately.
I find that it helps my creative process to write about partially formed ideas. That way, I can flesh them out and see if there’s anything there. So - how about this one?
The Main Thought What if, instead of having LLM-based agents call tools directly, you provided them with a registry of tools that they could use?
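To make that concrete, here’s a rough sketch in Python of what I mean by a registry - the names (ToolRegistry, catalog, invoke) are made up for illustration, not taken from any existing framework. The model only ever sees the catalog text; the runtime resolves tool names to callables and executes them.

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Tool:
    name: str
    description: str
    func: Callable[..., str]


class ToolRegistry:
    def __init__(self) -> None:
        self._tools: Dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def catalog(self) -> str:
        # This is all the agent sees: names and descriptions, not the callables.
        return "\n".join(f"{t.name}: {t.description}" for t in self._tools.values())

    def invoke(self, name: str, **kwargs) -> str:
        # The runtime, not the model, resolves the name and makes the call.
        return self._tools[name].func(**kwargs)


registry = ToolRegistry()
registry.register(Tool("get_weather", "Return today's forecast for a city.",
                       lambda city: f"Sunny in {city}"))
print(registry.catalog())                            # goes into the agent's prompt
print(registry.invoke("get_weather", city="Tokyo"))  # executed by the runtime
```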
Leetcode doesn’t actually assess skill I’ve been thinking a lot about the state of skills evaluation in the tech industry. Usually, when you apply for a job, you meet with a recruiter, then the hiring manager, then you do a technical screen, and then a panel screen. Sometimes the technical screen comes before the hiring manager screen (which is a bad idea - it’s not a good use of the candidate’s time, because mission/vision fit should be the most important thing a candidate looks for in the market).
I recently upgraded my phone to the Samsung Galaxy Z Fold 6, and I have some feelings about its design, the design and promise of foldables in general, and some of its quirks and eccentricities.
Upgrade Logic I upgraded to the Z Fold 6 from the Galaxy S21 Ultra, and I considered a few different phones in making that choice.
I like the fun and size of the Z Flip 6, and it’s not as expensive as the Fold, but I didn’t like that it had only 8 GB of RAM.
This past weekend, I had the opportunity to try Streamlit - the previously buzzy framework, since acquired by Snowflake, for building dashboards and simple (or not so simple) data applications.
In short, I had a great time, and it’s gotten me thinking about the state of business intelligence (BI) applications, and why many engineers (myself included) find working with and integrating BI applications into the software data stack to be frustrating and unenjoyable.
I’ve had the HHKB Studio for about a month now. I’ve been using it as the main keyboard for my Mac Mini, which I primarily use for coding personal projects, such as this website. Indeed, over the course of the month I used the keyboard to build the first iteration of the backend for venmo-calculator as well as the redesign of the theme for my blog (more details on that coming soon).
It occurred to me the other day that, now that the application model for modern computing has largely shifted to the web (and therefore browsers), I often don’t know what to call the silly, just-for-fun one-off projects that I sometimes build.
Take, for example, venmo-calculator - is this an app? Is it a program? Technically, it’s a compiled golang executable for the backend plus a static html+js website that’s only “compiled” in the sense that vue turned a vue component into javascript - but that javascript is itself interpreted.
On my recent trip to Japan I had the opportunity to visit one of the few places that sells HHKB products in-store.
While I’d previously had experience with the HHKB, it had been quite some time since I was actively engaged with mechanical keyboards as a hobby. For the most part, I had settled on the HHKB Pro 2 as my keyboard of choice: the layout worked well for me and the Topre switches were enjoyable, but I was no longer as discerning about switches as I used to be.
Precis (properly Précis, pronounced “pray-see”) is an extensibility-oriented RSS reader that can use LLMs to summarize and synthesize information from numerous different sources, with an emphasis on timely delivery of information via notifications.
Get it here.
Technical Details Precis is a FastAPI monolith that serves fully static pages styled by Tailwind CSS using DaisyUI components. It uses some query-parameter and redirect chicanery to fake interactivity. We’ll probably add actual interactivity at some point.
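This isn’t the actual Precis code, but the query-parameter-and-redirect trick looks roughly like the following minimal FastAPI sketch (the route names and the updated parameter are invented for illustration): the action endpoint does its work server-side, then bounces the browser back to a plain page whose query string decides what gets rendered.

```python
from fastapi import FastAPI
from fastapi.responses import HTMLResponse, RedirectResponse

app = FastAPI()


@app.get("/feeds", response_class=HTMLResponse)
def list_feeds(updated: bool = False):
    # "Interactivity": the static page simply renders differently based on the query string.
    banner = "<div class='alert'>Feed refreshed.</div>" if updated else ""
    return f"<html><body>{banner}<h1>Feeds</h1></body></html>"


@app.post("/feeds/{feed_id}/refresh")
def refresh_feed(feed_id: str):
    # Do the real work server-side (hypothetical helper), then redirect back.
    # refresh_feed_content(feed_id)
    return RedirectResponse(url="/feeds?updated=true", status_code=303)
```

A 303 redirect after the POST keeps the browser from re-submitting the action on refresh, which is most of what a forms-and-links UI needs.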
The purpose of this post is to explain how I like to Do Work and the principles underlying those choices. This is not meant to say that I consider the way I work to be the only correct one, nor that I am unwilling to try other ways of working. However, the impression that I would like readers to come away with is that the way I work is principled, considered, and intentional.
Most of the posts on this blog are my opinion. The same can be said of most posts on most blogs. Sometimes, however, I like to write a blog post that does the other thing for which we use the internet: conveying and sharing information. This is one of those posts (but not the only one).
Homelab 2.0 Historically, my homelab has been a series of NUCs scattered around my house, connected to nodes in my mesh wifi network.
One thing that’s very important to me is maintaining a healthy media diet. While I’m not averse to recommendation algorithms, I find that they optimize for engagement and on-platform time, as is required to produce unending growth in ad revenue. As a result, I find that such algorithms sacrifice quality for quantity, outrage, or speed. At its extreme, in the case of the conservative control of mainstream media, this disregard for quality manifests as disregard for truth.
I regret to inform you that I’ve been playing around with AI art and I’ve been having a great time doing so.
I think it’s accurate to say that I’m an AI skeptic. However, over the past weekend I came down sick and found myself with some free time, so I tried playing around with text-to-image models hosted locally. I have a pretty old (i5-4460) desktop PC with a GTX 1070 that doubles as an occasional gaming PC, so I installed Debian on it (for minimum overhead) and played around.
It’s quite easy to find dystopian sci-fi.
I think that this is because our current state and path lead us exactly towards one. However, it is harder to find the opposite; while it’s hard to imagine any fully utopian science fiction that provides the stakes, structure, or depth necessary to also make an entertaining book, I think that there are places to find science and speculative fiction that is written with better days in mind.
I’ll say it - AWS Athena is criminally underrated as a platform for processing complex, non-interactive business data workloads.
I think that Athena has a bit of a reputation for being more of a json-wrangling, log-querying tool than one for business-centric workloads. However, I suspect that this is largely because AWS loves to cross-promote its products, so Cloudwatch features prominently in the documentation. It is, in fact, equally proficient for business data if you know what you’re getting yourself into.
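To show what I mean by a non-interactive business workload (the bucket, database, and table names below are placeholders, not anything I actually run), the whole thing can be driven from boto3: submit the query, poll until it settles, and pick up the results from S3 afterwards.

```python
import time

import boto3

athena = boto3.client("athena")

# Placeholder database/table/bucket names - substitute your own.
resp = athena.start_query_execution(
    QueryString="""
        SELECT order_date, SUM(amount) AS revenue
        FROM analytics.orders
        GROUP BY order_date
    """,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/daily-revenue/"},
)
query_id = resp["QueryExecutionId"]

# Non-interactive: poll until the query finishes, then read the CSV from S3 later.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(5)
```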
By any objective measure, I should be super excited about advances in AI and its recent entrance into ubiquity. However, every time I hear about the latest GPT iteration or the latest startup that uses generative LLMs to revolutionize XYZ industry, I just feel bad.
Here’s a bulleted list (in no particular order) of all the reasons AI makes me feel bad:
In a vague science-fictional way I’ve always expected “Artificial Intelligence” to be more neuro-mimetic.