Musings

2026 MAY

A vision-language model fine-tuned on your CAD library and a year of factory-floor footage is a digital-twin starter kit on rails. It catches the gap between as-designed and as-built — the gap that eats manufacturing margin — without instrumenting a single new sensor.
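A rough way to prototype that gap check, sketched below with an off-the-shelf CLIP model standing in for the fine-tuned VLM the musing describes. The checkpoint is a common public one; the image file names are hypothetical.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

images = [
    Image.open("cad_render_station_7.png"),   # as-designed: render from the CAD library
    Image.open("floor_photo_station_7.jpg"),  # as-built: frame from factory footage
]
inputs = processor(images=images, return_tensors="pt")

with torch.no_grad():
    emb = model.get_image_features(**inputs)
emb = emb / emb.norm(dim=-1, keepdim=True)  # unit-normalize for cosine similarity

drift = 1.0 - (emb[0] @ emb[1]).item()      # 0 = identical, higher = divergence
print(f"as-designed vs as-built drift: {drift:.3f}")
```

Fine-tuning on paired renders and footage is what turns this raw similarity score into something calibrated to your line.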
Kolmogorov–Arnold networks run slower than MLPs at inference. In regulated industries — banking, pharma, defense — that trade is worth it: KAN edges are learnable univariate functions you can plot and read. MLP weights are an opaque tangle. When auditors come knocking, an open book beats a black box.
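A minimal sketch of what "readable edges" means, assuming PyTorch. The paper uses B-splines; piecewise-linear interpolation below keeps the idea visible: the edge is a 1-D function whose shape you can print.

```python
import torch

class EdgeFunction(torch.nn.Module):
    """One KAN-style edge: a learnable univariate function on a fixed grid."""

    def __init__(self, lo: float = -1.0, hi: float = 1.0, n_knots: int = 16):
        super().__init__()
        self.lo, self.hi = lo, hi
        self.register_buffer("knots", torch.linspace(lo, hi, n_knots))
        self.heights = torch.nn.Parameter(torch.zeros(n_knots))  # the learned shape

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Piecewise-linear interpolation between learned knot heights.
        x = x.clamp(self.lo, self.hi)
        idx = torch.bucketize(x, self.knots).clamp(1, len(self.knots) - 1)
        x0, x1 = self.knots[idx - 1], self.knots[idx]
        t = (x - x0) / (x1 - x0)
        return torch.lerp(self.heights[idx - 1], self.heights[idx], t)

edge = EdgeFunction()
xs = torch.linspace(-1, 1, 5)
with torch.no_grad():
    ys = edge(xs).tolist()
# The audit story: the whole edge fits on one line of output.
print([(round(x, 2), round(y, 2)) for x, y in zip(xs.tolist(), ys)])
```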
A quantized 12B-parameter SLM with full PubMed RAG runs on a Jetson-class board. Doctors Without Borders, off-grid, 400:1 patient ratios, no internet. The frontier is not always where you think it is — sometimes it is exactly where the cloud cannot reach.
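A sketch of the off-grid loop, assuming llama-cpp-python and a locally quantized GGUF checkpoint. The model path, cache path, and question are all hypothetical; nothing here touches the network.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/slm-12b.Q4_K_M.gguf",  # hypothetical quantized weights on disk
    n_ctx=4096,        # room for a retrieved abstract plus the question
    n_gpu_layers=-1,   # offload everything the board's GPU can hold
)

# Hypothetical local RAG hit pulled from an offline PubMed cache.
abstract = open("./pubmed_cache/retrieved_abstract.txt").read()
prompt = (
    "Use only the abstract below to answer.\n\n"
    f"{abstract}\n\nQ: What dosing interval does this study recommend?\nA:"
)
out = llm(prompt, max_tokens=200, stop=["Q:"])
print(out["choices"][0]["text"].strip())
```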
Most SMB AI projects start life as fine-tuning and end as RAG. Fine-tuning teaches the model to sound like you; RAG teaches it to know what you know. Almost everyone needs the second. They rarely need the first.
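A minimal sketch of the RAG half of that trade-off, assuming sentence-transformers and NumPy: embed your documents once, retrieve the closest ones per question, stuff them into the prompt. The model name is a common default; the documents and question are toy data.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Refunds are processed within 14 days of receiving the returned item.",
    "Our warehouse ships orders Monday through Friday, excluding holidays.",
    "Enterprise plans include a dedicated support channel and 99.9% uptime.",
]
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question (cosine)."""
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity, since vectors are unit-norm
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

question = "How long do refunds take?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQ: {question}\nA:"
print(prompt)  # hand this to whatever instruction-tuned model you already run
```

No weights change: the model knows what you know because you showed it, at answer time.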