They say to “just build”, “just post”, “just do things”, so I’ll do just that. The goal is weekly posts, with no structure to what gets posted. Technical posts will not be rushed for the sake of posting (most of the blogs doing them regularly are heavily AI slop anyway, and the internet does not need one more), so the gaps will be filled with whatever is top of mind that week.
Top of Mind
I am interested in full-stack computing. From the transistors to the electron-slop memory leaking on your device. For now the idea is breadth-first, with depth where I find value. I’ll read and post about the literature™️ or trending topics when they hit the headlines (ex: this week DRAM, HBM, and SRAM became cool again), but my projects will stay focused on a select few things in AI/ML to start.
My main goal at the moment is to understand the landscape of AI/ML. I know that doesn’t mean anything anymore, but to me it means understanding the hardware, the model architectures, and the systems built to serve and productize the models (agent harnesses). Do I believe we are in a rapid takeoff and my white-collar job is at risk? Leaning towards yeah. So the best way to deal with that anxiety is to become irreplaceable, even long after agents are running the show.
What’s Next?
We start with GPUs. Here are two YouTube playlists I am starting with:
- Programming Massively Parallel Processors lecture series by Professor Izzat El Hajj from the American University of Beirut
  - This one is more of a conceptual watch to act as a supplement as I go through the actual book
- CUDA Course on YouTube
  - Shoutout Elliot, a great follow on Twitter. This one will be more hands-on, with me building projects.
As mentioned, I’ll also be going through the PMPP book and updating the blog with any random resources, videos, or challenges I find useful. The first two I’ll check out are the GPUMode discord/site and Modular’s problem set; I’ll report back on those.
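To give a flavor of what those courses open with: the classic first exercise is vector addition in the SPMD style, where each GPU thread computes one output element identified by its index. Here is a plain-Python sketch of that pattern (the function names and the sequential `launch` loop are my own illustrative stand-ins, not real CUDA):

```python
# Plain-Python sketch of the SPMD vector-add pattern from PMPP-style courses:
# one "thread" per output element, identified by a global index.
# Illustrative only; real CUDA would launch vec_add_kernel across a grid of
# threads instead of looping sequentially.

def vec_add_kernel(i, a, b, out):
    # The body each GPU thread would run, with i playing the role of
    # blockIdx.x * blockDim.x + threadIdx.x in CUDA.
    if i < len(out):  # bounds check, just as a real kernel guards against
        out[i] = a[i] + b[i]  # threads past the end of the array

def launch(n, a, b):
    out = [0] * n
    for i in range(n):  # sequential stand-in for the parallel thread grid
        vec_add_kernel(i, a, b, out)
    return out

print(launch(4, [1, 2, 3, 4], [10, 20, 30, 40]))  # [11, 22, 33, 44]
```

The point of the exercise is the mental shift: you write the per-element body and the hardware supplies the loop, in parallel.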
In parallel, I’ll start building an intuition for Deep Learning and developing strong fundamentals in ML, eventually working up to Transformers, GPT, MoE, etc. Starting with François Chollet’s Deep Learning with Python book 🙂
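By “fundamentals” I mean things like being able to do gradient descent by hand before reaching for a framework. A minimal sketch of that kind of exercise, fitting y = 2x + 1 with hand-computed MSE gradients (the function name, data, and hyperparameters here are my own toy choices, not from the book):

```python
# Toy fundamentals exercise: fit y = w*x + b to data with gradient descent,
# computing the mean-squared-error gradients by hand rather than via autograd.
# Illustrative sketch only.

def fit_line(xs, ys, lr=0.05, steps=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        preds = [w * x + b for x in xs]
        # d(MSE)/dw and d(MSE)/db, averaged over the dataset
        grad_w = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / n
        grad_b = sum(2 * (p - y) for p, y in zip(preds, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]  # ground truth: w = 2, b = 1
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # converges to roughly 2.0 and 1.0
```

Everything in a modern training loop is a scaled-up version of those five lines in the inner loop, which is why it feels worth internalizing first.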
Lastly, I’ll explore how different groups are building their agent harnesses. I really like Opencode and the Claude Agent SDK, so I’ll likely clone both repos and start poking around in there, but I wouldn’t expect any projects in the short term.
None of this is set in stone; the landscape is evolving rapidly, so I don’t want to get too caught up in the software abstractions. The reason I am starting with GPUs and Deep Learning fundamentals is that those are highly unlikely to shift in 2026-2027, and the intuition I pick up from this material will last me long into the AI Agent takeover of the white-collar economy.