<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>AI on JoeSindel.com</title><link>https://joesindel.com/tags/ai/</link><description>Recent content in AI on JoeSindel.com</description><generator>Hugo</generator><language>en</language><lastBuildDate>Sat, 25 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://joesindel.com/tags/ai/index.xml" rel="self" type="application/rss+xml"/><item><title>Building a Private AI Server with NVIDIA Jetson AGX Thor</title><link>https://joesindel.com/posts/thor-ai-home-server/</link><pubDate>Sat, 25 Apr 2026 00:00:00 +0000</pubDate><guid>https://joesindel.com/posts/thor-ai-home-server/</guid><description>&lt;p>A few days ago I got my hands on an NVIDIA Jetson AGX Thor developer kit. 128GB of unified memory, a Blackwell GPU, and enough raw compute to run serious language models locally. This post covers the full build: three inference backends, voice chat from my phone over cellular, video-based object detection, a complete monitoring pipeline, and a custom dashboard. Everything survives a reboot; nothing touches the cloud.&lt;/p>
&lt;hr>
&lt;h2 id="the-hardware">The Hardware&lt;/h2>
&lt;p>The Jetson AGX Thor is not a consumer product. It&amp;rsquo;s a developer kit built for robotics and edge AI workloads. The specs that matter for LLM inference:&lt;/p></description></item></channel></rss>