CPUs are back (according to Semianalysis)

Posted on Wed 18 February 2026 in CPU, GPU, Datacenters, Computer Architecture, Networks • Tagged with CPU, GPU, Datacenters, Computer Architecture, Networks

Semianalysis has published a fantastic analysis of the current state of CPUs and GPUs in datacenters, including a thorough history of both. This is my summary.

Global idea: from 2023 to 2025Q4, AI training made GPUs more important than CPUs in datacenters. Since then, however, Reinforcement Learning and vibe coding have been making CPUs more important again.

Also important: ARM is growing (with processors like AWS Graviton and Nvidia Grace), AMD is growing, and Intel is shrinking.

A brief history of CPUs and GPUs in datacenters

1990s: origin of the modern datacenter. The success of the PC made it possible to replace costly workstations and IBM mainframes. Intel created chips for servers, the Pentium Pro (1995) and the Xeon (1998), with more cache than their desktop counterparts.

2000s: the dot-com era. The GHz race ended with the breakdown of Dennard scaling: around 2004, shrinking transistors stopped delivering proportional reductions in power consumption, so clock frequencies could not keep increasing. The multicore era began, with more cores, SMT (Simultaneous Multithreading), and multisocket systems instead of higher clock speeds …


Continue reading

The perfect strace command

Posted on Fri 17 October 2025 in Operating Systems, Debugging • Tagged with Operating Systems, Debugging

The Linux utility strace is essential for diagnosing process–kernel interactions, but its default output is often unusable. The key to effective debugging is using a specific set of flags that transform raw system call data into a structured, time‑stamped, and annotated log.

According to Avikam Rozenfeld in this presentation, here is the essential command template, followed by a breakdown of why each flag is critical:

strace -f -s 256 -o trace.log -tt -T -y <your_command_here>
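
For instance (this is my own illustration, not from the presentation; the traced command is arbitrary), the template can be applied and the log then filtered for the calls you care about:

# Trace an arbitrary command (curl here is just an illustrative example)
strace -f -s 256 -o trace.log -tt -T -y curl -s https://example.com -o /dev/null

# Inspect only the open/openat calls from the resulting log
grep -E 'openat?\(' trace.log | less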

Flags and why they matter:

  • -f — Follow children
    Purpose: Trace child processes spawned by fork/clone.
    Key benefit: Ensures you trace the entire application flow (e.g., piped commands).

  • -s 256 — Increase string size
    Purpose: Increase the string output limit (default 32 bytes) to 256 bytes.
    Key benefit: Prevents truncation of file paths and data being read or written.

  • -o — Output to file
    Purpose: Redirect all strace output to a specified log file (e.g., trace.log).
    Key benefit: Separates trace output from the program's standard output for easier analysis.

  • -tt — Precise timestamp
    Purpose: Prefix every line …


Continue reading

To build a new OS, or not to build a new OS?

Posted on Sun 07 September 2025 in Operating Systems, Software Development • Tagged with Operating Systems, Software Development, Linux, Omarchy, John Carmack, DHH

There's a fascinating debate in the software world about whether it still makes sense to create a new general-purpose operating system from scratch.

The Pragmatic View

On one side, you have figures like John Carmack. In a recent discussion on X, he argued that building a new OS is often impractical. The cost, short lifespan, and developer burden rarely justify the effort, a lesson he learned from opposing Meta's custom XR OS.

The Idealistic View

On the other side is the spirit of the ultimate craftsman. This is captured perfectly in a joke by DHH during this presentation:

"People who are really serious about software should make their own operating system."

He's riffing on a famous quote by Alan Kay about hardware, but the message is clear: the ultimate challenge for a software purist is to build the whole stack.

A Middle Ground: Omarchy

Interestingly, DHH's own work offers a third path. He hasn't built an entirely new OS. Instead, he created Omarchy …


Continue reading

Problems with the GitHub Default Remote Using SSH

Posted on Wed 28 May 2025 in git, GitHub • Tagged with git, GitHub

This is the second time I've run into this problem, so I thought I'd write it down.

When I create a new repository on GitHub, the default instructions to add the remote repository to my local git repository are:

git remote add origin git@github.com:my_username/new_repo_name.git
git branch -M main
git push -u origin main

The problem is that this assumes I'm using the SSH protocol to connect to GitHub. However, I prefer HTTPS for my connections, so I need to change the remote URL from SSH to HTTPS. To do this, I can use the following command:

git remote set-url origin https://github.com/my_username/new_repo_name.git
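
To confirm the change took effect, git remote -v lists each remote with its fetch and push URLs, and both should now point at the HTTPS address:

git remote -v
# expected output (roughly):
# origin  https://github.com/my_username/new_repo_name.git (fetch)
# origin  https://github.com/my_username/new_repo_name.git (push)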

To avoid this altogether, the instructions for adding the remote over HTTPS directly would be:

git remote add origin https://github.com/my_username/new_repo_name.git
git branch -M main
git push -u origin main
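
As a final note that goes beyond the original problem (this is just my own workaround, not something GitHub's instructions mention), git can also be told once, globally, to rewrite GitHub SSH URLs to HTTPS, so that even the default instructions end up using HTTPS:

# Rewrite any git@github.com: remote URL to https://github.com/ on the fly
git config --global url."https://github.com/".insteadOf "git@github.com:"

With this in place, the first set of instructions works as-is, because git transparently substitutes the HTTPS URL when fetching and pushing.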

Publishing PowerPoint presentations with animations as PDFs

Posted on Wed 30 April 2025 in PowerPoint, PDF • Tagged with PowerPoint, PDF

When I create PowerPoint presentations, I often use animations to explain complex processes. I design slides so the final state is clear on its own, but sometimes this isn’t feasible. When sharing these presentations as PDFs, animations are lost, making slides hard to understand.

I’ve found a solution: PPSplit, a PowerPoint plug-in that exports presentations with animations as PDFs. PPSplit works by splitting animated slides into multiple PDF pages, each representing a step in the animation sequence. This preserves the flow and context of the presentation, ensuring viewers can follow the intended progression without needing the original PowerPoint file.

To use PPSplit on Windows, install the plug-in from its web page and open your presentation in PowerPoint. PPSplit adds a new tab for splitting the slides. Note that splitting changes the file, so make a copy of the original presentation before using it, especially if you have auto-save enabled. After splitting, you can save the presentation as a PDF. The resulting PDF will have each animation step on a separate page, making …


Continue reading