Welcome to the Hacker Podcast, where we dive into the most intriguing tech stories making waves across the internet! Today, we're exploring everything from building tiny apps for friends to counting yurts with AI, and from new takes on classic command-line tools to the future of databases.
Scrappy – Make little apps for you and your friends
Imagine a world where creating small, personal software for your friends and family is as easy as sketching on a whiteboard. That's the vision behind Scrappy, a research prototype aiming to revive "home-made software." This isn't about enterprise solutions or mass-market apps; it's about making software creation a creative, personal, and expressive activity accessible to anyone, echoing movements like "small computing" and "end-user programming."
Scrappy presents itself as an infinite canvas where you drag and drop interactive objects like buttons and text fields. You add functionality by attaching JavaScript code to events, and the environment is "always live"—no separate edit or run modes. It boasts built-in multiplayer capabilities with shared, persistent state, and the ability to share parts of an app using "frames." The design keeps data visible and tangible, much like a spreadsheet, simplifying debugging and remixing. Drawing inspiration from classics like HyperCard and Visual Basic, as well as modern tools like Figma, Scrappy deliberately avoids block-based programming and focuses on direct manipulation over AI-centric code generation, though AI assistance is a future consideration. While currently targeting "programmer DIYers" due to the JavaScript requirement, the goal is to lower the barrier for anyone with basic computer literacy. Think simple counters, chore trackers, or collaborative meeting tools—problems that benefit from being shared, tweaked on the fly, and used by a small, trusted group without account friction.
The community's reaction was a mix of excitement and thoughtful critique. A major point of discussion revolved around longevity and hosting. Many expressed concern about Scrappy being a hosted SaaS solution, fearing their personal projects would vanish if the platform did. They advocated for self-hostable, local-first, or decentralized approaches, drawing comparisons to TiddlyWiki and Hyperclay. The co-creator clarified that Scrappy is designed with a local-first architecture, relying only on a lightweight sync server, and that single-page HTML export was explored.
Discussions also heavily featured comparisons to other tools and paradigms. Users likened Scrappy's canvas-and-scripting model to HyperCard, Visual Basic, MS Access, and Delphi/Lazarus, noting the concept's historical roots. Some suggested that modern web technologies already serve this purpose for developers. The rise of AI and "vibe coding" was also brought up as a potential competitor, with some arguing LLMs can generate simple apps quickly. However, others countered that AI-generated code can be opaque for non-programmers to debug, and that Scrappy's direct manipulation offers a different, more engaging creative experience. Questions were also raised about the target audience and complexity, with the JavaScript requirement seen as a barrier for true non-programmers. Finally, the mobile experience was highlighted as crucial, with calls for mobile editing, not just usage, given the prevalence of phones as primary computing devices.
MiniMax-M1 open-weight, large-scale hybrid-attention reasoning model
MiniMax has unveiled MiniMax-M1, an open-weight, large-scale reasoning model that's turning heads with its innovative hybrid architecture. This model combines a Mixture-of-Experts (MoE) design with a novel "lightning attention" mechanism, promising significant advancements in efficiency and context handling.
MiniMax-M1 boasts a massive 456 billion total parameters, with approximately 45.9 billion activated per token. A standout feature is its native support for a staggering 1 million token context length, far exceeding many current models. The lightning attention mechanism is touted for its efficiency, reportedly using only 25% of the FLOPs compared to models like DeepSeek R1 for long generations. Trained with a new reinforcement learning algorithm called CISPO, MiniMax-M1 comes in two versions, M1-40k and M1-80k, referring to different "thinking budgets." Benchmarks provided by MiniMax show strong performance, especially in complex areas like software engineering, tool use, and long-context tasks.
The Hacker News community immediately jumped into the practicalities of running such a behemoth. Many pointed out the significant hardware requirements for full precision inference, estimating a need for 8x H200 GPUs, costing around $250,000. This sparked a lively debate about quantization, with some believing Q4 or Q8 quantization could make the model runnable on much cheaper hardware, potentially under $10,000, or even on high-memory Macs. However, others expressed skepticism, noting that heavily quantized models often don't fully match the performance of their unquantized counterparts.
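To put the hardware debate in perspective, here's the back-of-envelope arithmetic commenters were doing. This sketch counts weight storage only; the KV cache, activations, and per-block quantization overhead all add to the real footprint, which is one reason heavily quantized deployments can still disappoint:

```python
# Back-of-envelope weight-memory estimates for a 456B-parameter model
# under different quantization levels. Weight storage only: KV cache,
# activations, and quantization overhead are ignored, so real
# requirements are higher.

TOTAL_PARAMS = 456e9  # MiniMax-M1's reported total parameter count

BYTES_PER_PARAM = {
    "fp16/bf16": 2.0,  # full-precision-ish serving
    "q8": 1.0,         # 8-bit quantization
    "q4": 0.5,         # 4-bit quantization
}

def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params * bytes_per_param / 1e9

for fmt, bpp in BYTES_PER_PARAM.items():
    print(f"{fmt:>10}: ~{weight_memory_gb(TOTAL_PARAMS, bpp):,.0f} GB")
```

At fp16 that's roughly 912 GB of weights, which is why the estimate lands at a multi-GPU H200 node, while Q4 brings the weights alone down to around 228 GB, within reach of a high-memory Mac or a modest multi-GPU rig.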
Another significant thread revolved around the company's origin and transparency. While the GitHub repo and international website don't explicitly state the company's location, users quickly found evidence suggesting MiniMax is a Chinese company based in Shanghai, with an international entity in Singapore. This led to a discussion about whether such disclosure should be standard practice on project pages, especially for those evaluating the model for commercial use. Technically, users noted the model's structure involves a mix of linear and full attention, and there was speculation about the true cost of training, given the relatively low figure cited for the RL phase. The model's name also drew some lighthearted comments, with its similarity to Apple's M1 chip and the company name's derivation from a classic AI algorithm.
Show HN: Lstr – A modern, interactive tree command written in Rust
A new contender has emerged in the command-line utility arena: lstr, a modern, interactive take on the classic Unix tree command, crafted with Rust. The creator, a first-time Hacker News poster, developed lstr out of a desire for more modern features like interactivity and Git integration, while maintaining the speed and minimalism Rust is known for.
lstr offers high performance through parallel directory scanning and features dual modes: a classic tree-like output and a powerful interactive Terminal User Interface (TUI). It can display rich information like file icons (with Nerd Fonts), permissions, and sizes. A standout feature is its Git integration, showing Git status (modified, new, untracked) directly in the tree view. It also intelligently filters using .gitignore and allows control over recursion depth. The TUI mode is particularly useful, letting users select a path and print it to standard output, enabling seamless workflows like visually selecting a directory to cd into.
The initial reaction from the community was overwhelmingly positive, with users calling it "cool," "impressive," and "useful," particularly praising the interactive mode and Git integration. However, a significant portion of the discussion quickly shifted to binary size. Initial reports noted a large debug binary (53MB), sparking a debate about Rust's perceived binary bloat. Commenters swiftly clarified that this was a debug build, and a release build is significantly smaller (around 4.3MB), with further optimizations capable of reducing it to 2-3MB. This led to a broader discussion about static vs. dynamic linking, the size of modern standard libraries, and whether binary size truly matters in an age of terabyte drives. Users also compared lstr to other existing tools like eza (a modern ls replacement with tree view) and broot (an interactive file manager), seeing lstr as a dedicated, interactive tree experience with unique Git status integration. The author also shared insights into learning Rust through the project, appreciating its robust ecosystem and tooling, despite a steeper learning curve.
I counted all of the yurts in Mongolia using machine learning
This fascinating project combines machine learning with a deep dive into modern Mongolian society. Inspired by a history podcast, the author embarked on a large-scale satellite image analysis to count yurts (or "gers") across Mongolia. The technical process involved training an object detection model, YOLO, to identify yurts in Google Maps satellite imagery. To manage the vast area, the search was refined to buffer zones around known human settlements using OpenStreetMap data. A custom FastAPI backend integrated the model with Label Studio, creating a feedback loop that rapidly improved accuracy by labeling over 10,000 yurts. The final model was scaled across 120 parallel workers using Docker Swarm to process millions of tiles, resulting in an estimated 172,689 yurts with a prediction score over 40%. Beyond the technical feat, the article delves into the historical context of yurts and the contemporary reality of urban ger districts, highlighting the challenges of providing infrastructure as nomadic populations move to cities.
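The article doesn't publish its tiling code, but any pipeline fetching map imagery at this scale leans on the standard Web Mercator "slippy map" tile scheme shared by Google Maps and OpenStreetMap. Here's a minimal sketch of the coordinate math (function names and the bounding-box helper are mine, not the author's):

```python
import math

def latlon_to_tile(lat: float, lon: float, zoom: int) -> tuple:
    """Map a WGS84 coordinate to Web Mercator tile indices (x, y).

    Standard slippy-map formula: x grows eastward, y grows southward,
    with 2**zoom tiles per axis.
    """
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def tiles_in_bbox(lat_min, lat_max, lon_min, lon_max, zoom):
    """Enumerate every tile covering a bounding box, e.g. a buffer
    zone drawn around a known settlement."""
    x_min, y_min = latlon_to_tile(lat_max, lon_min, zoom)  # NW corner
    x_max, y_max = latlon_to_tile(lat_min, lon_max, zoom)  # SE corner
    for x in range(x_min, x_max + 1):
        for y in range(y_min, y_max + 1):
            yield x, y

# Ulaanbaatar at a high zoom level, roughly the detail needed to
# resolve a several-metre ger in satellite imagery:
print(latlon_to_tile(47.8864, 106.9057, 17))
```

Enumerating tiles per settlement buffer rather than over the whole country is what makes "millions of tiles" tractable for 120 workers instead of billions.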
The community's discussion brought a diverse range of perspectives, particularly challenging the author's framing of urban ger districts solely as a "failure" of public policy. Many emphasized the profound cultural significance of yurts and the nomadic lifestyle in Mongolia, pointing out that living in a ger, even in an urban setting, can be a conscious choice to maintain cultural connection or even a status symbol, not just a sign of necessity.
On the technical side, there was considerable discussion and critique regarding accuracy and methodology. Users questioned the lack of external validation against ground truth data and the potential for false positives. The choice of a 40% confidence threshold was debated, and some felt limiting the search area to settlement buffers might miss yurts in truly remote nomadic regions. A significant point raised was the potential violation of Google Maps' Terms of Service by systematically downloading tiles, with OpenStreetMap suggested as a more permissible data source. Amidst the technical debate, there was positive feedback on the practical application of ML and the use of Docker Swarm for scaling. And, of course, a humorous thread emerged discussing the grammatical ambiguity of the title itself.
3D-printed device splits white noise into an acoustic rainbow without power
Prepare to have your mind blown by a new acoustic device that acts like a prism for sound! Researchers have developed a 3D-printed device called an Acoustic Rainbow Emitter (ARE) that can take broadband white noise from a single source and passively split it into its different frequency components, directing each frequency in a different direction.
Unlike most artificial sound control systems that rely on active electronics or resonance, this device works purely through passive scattering. Its intricate, complex shape is the result of computational morphogenesis, specifically using topology optimization and finite element analysis. This design process allowed researchers to iteratively refine the structure to achieve the desired frequency-dependent scattering pattern in free space. They also designed a related device, a "lambda splitter," which separates low and high frequencies into different paths. The researchers emphasize that this work demonstrates the power of computational design and 3D printing for manipulating sound fields without needing any power source.
The community on Hacker News was buzzing with excitement and curiosity about this passive sound manipulation. Many immediately drew parallels to other wave phenomena, with one user sharing an anecdote about corrugated building surfaces acting as acoustic diffraction gratings during a thunderstorm. Others compared the device to optical prisms and even the human cochlea, though some pointed out key differences in their mechanisms.
The "crazy shape" of the device sparked discussion about the design process. While some initially suggested "machine learning," others clarified that the paper describes it as topology optimization guided by a defined objective function—essentially a sophisticated search algorithm to find the optimal shape. This led to speculation about whether a "nicer," more symmetric shape could achieve similar results, with the counterpoint being that computational optimization isn't constrained by biological growth processes and can find potentially irregular global optima. The potential applications generated a lot of brainstorming, from adaptive sports for the visually impaired using frequency for angle and amplitude for distance, to robot swarm localization, tracking machinery rotation, and even architectural noise control. Some more whimsical ideas included building a musical instrument or embedding hidden messages in white noise. Overall, the community found the concept "cool," "neat," and even "witchcraft," highlighting the novelty of achieving complex sound manipulation through purely passive, computationally designed structures.
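To make "a sophisticated search algorithm to find the optimal shape" concrete, here's a deliberately toy sketch in the same spirit: a greedy bit-flip search over a binary material layout, driven by an objective function. The objective below is entirely artificial (it just rewards alternating cells); the real work uses gradient-based topology optimization with finite element simulations of the acoustic field, not random search.

```python
import random

def topology_search(n_cells, objective, iters=2000, seed=0):
    """Toy topology optimization: flip one cell at a time between
    material (1) and void (0), keeping flips that don't worsen a
    user-supplied objective. Illustrates only the idea that an
    objective function, not a human aesthetic, drives the shape."""
    rng = random.Random(seed)
    layout = [rng.randint(0, 1) for _ in range(n_cells)]
    best = objective(layout)
    for _ in range(iters):
        i = rng.randrange(n_cells)
        layout[i] ^= 1  # flip one cell
        score = objective(layout)
        if score >= best:
            best = score          # keep neutral or improving flips
        else:
            layout[i] ^= 1        # revert worsening flips
    return layout, best

# Stand-in objective: reward alternating material/void cells
# (a placeholder for whatever scattering metric a real solver computes).
obj = lambda L: sum(L[i] != L[i + 1] for i in range(len(L) - 1))
layout, score = topology_search(32, obj)
print(score)  # at most 31, the score of a fully alternating 32-cell strip
```

The resulting layouts tend to look "designed" only in the sense of scoring well, which is exactly the point commenters made about irregular, non-symmetric optima.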
Locally hosting an internet-connected server
For anyone who's ever tried to host a server from home, you know the pain: dynamic IP addresses, Carrier-Grade NAT (CGNAT), and often broken IPv6 support from ISPs. This article tackles these frustrations head-on, presenting a technical solution: using a small Virtual Private Server (VPS) with a static public IP address as a gateway to tunnel traffic to and from your home servers.
The core idea involves setting up a VPN tunnel, likely using WireGuard, between the VPS and your home network. The VPS, with its static public IP, receives incoming connections. Policy routing is then configured on the VPS to forward traffic intended for specific services or IP addresses down the WireGuard tunnel to the corresponding server(s) at home. Return traffic from the home server(s) is routed back through the tunnel to the VPS, which then sends it out to the internet using its public IP. This setup allows multiple home servers to be accessible from the internet on standard ports, bypassing common ISP limitations and providing a stable public presence.
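The article's exact configuration isn't reproduced here, but the VPS side of such a setup might look roughly like this WireGuard sketch. Everything concrete below is a placeholder: the 10.8.0.0/24 tunnel subnet, the forwarded public address 203.0.113.10 (assumed to be an extra address routed to the VPS), and the key names.

```ini
; /etc/wireguard/wg0.conf on the VPS -- illustrative only
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>
; Steer traffic arriving for the forwarded public address down the tunnel.
PostUp   = ip route add 203.0.113.10/32 dev wg0
PostDown = ip route del 203.0.113.10/32 dev wg0

[Peer]
; The home server. It sits behind CGNAT, so it must initiate the tunnel.
PublicKey  = <home-server-public-key>
AllowedIPs = 10.8.0.2/32, 203.0.113.10/32
```

On the home side, the peer would set PersistentKeepalive so the tunnel survives NAT timeouts, and add a policy-routing rule so replies from the exposed service go back through the tunnel rather than out the home ISP's default route.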
The community's discussion highlights the widespread frustration with ISP practices that necessitate such complex workarounds. Many echoed the sentiment that the poor state of IPv6 adoption and implementation by ISPs, particularly in North America and Western Europe, is a major driver for these solutions. Users report unstable IPv6 prefixes, frequent address changes, and a general lack of reliable support, forcing them back to IPv4 workarounds.
Alternative approaches were discussed, including dynamic DNS (dismissed as insufficient for multiple services on standard ports) and port forwarding/reverse proxies (less flexible for non-web protocols). Several users proposed using managed VPN services or tunneling solutions like Tailscale/Headscale or Cloudflare Tunnels as simpler alternatives that abstract away some of the WireGuard and routing complexity. Cloudflare Tunnels, in particular, was noted for not requiring open firewall ports, though some expressed reservations about relying on a third party. The impact of CGNAT on end-users was a significant point of discussion, with some arguing it's irrelevant for the average person, while others countered that it causes problems like frequent CAPTCHA challenges, IP bans, and hinders peer-to-peer applications. The inability to receive unsolicited incoming connections was seen by some as an "enshittification" of the internet, moving away from its original decentralized design. Overall, the discussion underscores the ongoing struggle for users who want to self-host services from home, leading to a variety of creative, sometimes complex, technical solutions to regain direct internet connectivity.
Show HN: Workout.cool – Open-source fitness coaching platform
Workout.cool is a brand-new open-source fitness coaching platform that's making waves, and it comes with a compelling backstory. It's a revival and evolution of a previous open-source fitness app, workout.lol, which was abandoned after being sold. The original main contributor decided to rebuild the platform from scratch, focusing on long-term sustainability, a better architecture, and resolving past issues like video licensing.
Workout.cool is 100% open-source under the MIT license, offering a database of over 1200 exercises with videos, attributes, and translations, along with progress tracking and multilingual support. It's also self-hostable, built with a modern tech stack including Next.js, Prisma, and PostgreSQL. The author emphasizes that the project is not for profit, but driven by a passion for open fitness tools and strength training. The roadmap includes exciting features like a mobile app, gamification, advanced stats, wearable integration, and a community forum.
The community showed significant interest, with many expressing enthusiasm for the project's open-source nature and its inspiring revival story. The original author of workout.lol even chimed in, expressing happiness that the project was being maintained again. However, the launch experienced immediate technical issues, with numerous users reporting "Error loading exercises" due to unexpected high traffic from Hacker News, a common challenge for new projects hitting the front page. The author quickly responded, confirming fixes were being deployed.
Beyond the initial glitches, a major theme in the comments revolved around the user experience and workout generation logic. Several users, from fitness enthusiasts to beginners, found the initial muscle/equipment selection flow confusing or overly technical, suggesting it assumed too much prior knowledge. They proposed simpler entry points like goal-based presets ("full body," "beginner routine") or browsing exercises by movement patterns. Critiques were also raised regarding the algorithm for generating workout plans, with experienced users pointing out that the current logic (e.g., suggesting 33 exercises for a session without considering order or balance) doesn't align with effective fitness programming principles. Suggestions included focusing more on robust logging and tracking, enabling users to build and share their own routines, or incorporating input from actual trainers. The author actively engaged with the feedback, showing openness to suggestions and collaboration, and acknowledging that improving routine creation is a high priority.
Introduction to the A* Algorithm (2014)
Today, we're revisiting an evergreen classic: "Introduction to the A* Algorithm" from Red Blob Games. Despite being from 2014, its clear explanations of pathfinding on a graph continue to make it a go-to resource. The article elegantly walks us through a family of graph search algorithms, starting with Breadth-First Search (BFS), which explores equally in all directions to find the shortest path in terms of steps. Then comes Dijkstra's Algorithm, an evolution that accounts for different movement costs between locations, using a priority queue to find the lowest-cost path.
Finally, we arrive at A*. This algorithm combines the best of both worlds: like Dijkstra's, it uses a priority queue and tracks the cost incurred so far (g). But it also incorporates a "heuristic" function (h) that estimates the distance from the current location to the goal. The priority for A* is the sum of the cost so far and the estimated cost to the goal (g + h). This heuristic guides the search directly towards the destination, making it far more efficient than Dijkstra's, especially on large graphs, while still guaranteeing the shortest path if the heuristic never overestimates the true distance.
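The progression the article traces is easy to see in code. Here's a compact sketch (our own, not Red Blob's implementation) of A* over a weighted graph; note that a heuristic that always returns zero turns this straight back into Dijkstra's algorithm:

```python
import heapq

def a_star(graph, start, goal, heuristic):
    """A* search over a weighted graph.

    graph: dict mapping node -> list of (neighbor, edge_cost)
    heuristic(n): admissible estimate of the cost from n to goal
    Returns the lowest-cost path from start to goal, or None.
    """
    frontier = [(heuristic(start), start)]  # priority = g + h
    came_from = {start: None}
    g = {start: 0}                          # cost so far
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:                 # reconstruct the path
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for neighbor, cost in graph.get(current, []):
            new_g = g[current] + cost
            if neighbor not in g or new_g < g[neighbor]:
                g[neighbor] = new_g
                came_from[neighbor] = current
                heapq.heappush(frontier, (new_g + heuristic(neighbor), neighbor))
    return None

# 4-connected grid example: nodes are (x, y); Manhattan distance is an
# admissible heuristic because every step costs 1.
def grid_graph(w, h, walls):
    nodes = {(x, y) for x in range(w) for y in range(h)} - walls
    return {n: [((n[0] + dx, n[1] + dy), 1)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (n[0] + dx, n[1] + dy) in nodes]
            for n in nodes}

walls = {(1, 0), (1, 1)}                    # a wall between start and goal
g4 = grid_graph(3, 3, walls)
goal = (2, 0)
path = a_star(g4, (0, 0), goal,
              lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1]))
print(path)  # the shortest route detours over the top of the wall
```

Swap the priority queue for a plain FIFO queue and drop the costs entirely, and the same loop becomes BFS, which is exactly the "one core loop, three data structures" observation the community keeps coming back to.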
The community's discussion is always buzzing with appreciation for the article's clarity and the algorithm itself. Many highlight the elegant connection between these algorithms: BFS, Dijkstra's, and A* can all be seen as variations of the same core search loop, primarily differing in the data structure used for the "frontier" (the set of nodes to visit) and how they prioritize nodes within that structure. There's also a discussion about the heuristic function in A*. While an "admissible" heuristic (one that never overestimates) is necessary for an optimal path, users note that "inadmissible" heuristics are sometimes used in practice to gain performance at the cost of optimality, or even to create more "natural" behavior for agents in games.
Speaking of games, a significant thread revolves around A*'s application in game development. Red Blob Games is consistently praised as a fantastic resource for game developers. However, a debate often arises about whether A* is "realistic" for game AI, with some arguing it's a "performance hack" that assumes complete map knowledge. The counter-argument is that A* provides a necessary and efficient foundation, and realism can be layered on top using incomplete graphs, dynamic costs, or combining A* with other techniques. Ultimately, for most games, performance and engaging gameplay are prioritized over perfect simulation realism. The recurring theme of this article being periodically posted on Hacker News is also noted, with users defending its continued relevance as an "evergreen" piece.
Terpstra Keyboard
Step into the fascinating world of alternative musical tunings and keyboard layouts with the Terpstra Keyboard WebApp. This browser-based implementation of Siemen Terpstra's late 1980s design provides a playable interface for exploring beyond the standard 12-tone equal temperament and traditional piano layout.
The web app allows users to experiment with different tunings (using the Scala format), adjust layout parameters like hexagonal key size and rotation, and select various instruments. It's an open-source project under the GPL-3.0 license, encouraging community involvement for fixes and features via GitHub.
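The Scala tuning format the app accepts is a plain-text format: lines starting with "!" are comments, the first data line is a description, the next gives the note count, and each remaining line is a pitch written either in cents (contains a period) or as a ratio. Here's a minimal reader covering just that core, ignoring the format's edge cases:

```python
import math

def parse_scl(text):
    """Parse the core of a Scala (.scl) tuning file into cents values.

    '!' lines are comments; first data line = description, second =
    note count; each remaining line is a pitch in cents (has a '.')
    or a ratio like 3/2 (a bare integer means n/1).
    """
    lines = [l.strip() for l in text.splitlines()
             if l.strip() and not l.strip().startswith("!")]
    description, count = lines[0], int(lines[1])
    cents = []
    for raw in lines[2 : 2 + count]:
        token = raw.split()[0]
        if "." in token:
            cents.append(float(token))            # already in cents
        else:
            num, _, den = token.partition("/")
            ratio = int(num) / int(den or "1")
            cents.append(1200 * math.log2(ratio))  # ratio -> cents
    return description, cents

sample = """! example.scl
!
Pythagorean fifth and octave
2
3/2
2/1
"""
desc, cents = parse_scl(sample)
print(desc, cents)  # the pure fifth comes out near 701.96 cents
```

Converting everything to cents up front is convenient for a web app like this one, since the playback code can then derive each key's frequency from a single reference pitch.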
The community's discussion reveals a vibrant interest in alternative music theory and instrument design. Many expressed enthusiasm for microtonal music and non-standard tunings, recommending various artists and albums as entry points into the genre, from Kyle Gann's Hyperchromatica to King Gizzard and the Lizard Wizard's microtonal works. The discussion extended to alternative keyboard layouts, with particular interest in isomorphic keyboards like the Jankó layout, which some find easier for playing by ear. Users shared links and noted that the Terpstra WebApp can even mimic layouts like the B-griff used on some accordions.
A significant thread explored hardware implementations of these non-standard layouts. Users discussed building custom MIDI controllers using modern mechanical keyboard switches with hall effect sensors for velocity sensitivity. Existing, albeit often expensive, dedicated microtonal controllers like the Lumatone and Axis-49 were mentioned, along with more affordable alternatives. A related tangent delved into the idea of foot-controlled keyboards for musicians wanting to play bass lines or chords while using their hands for another instrument, bringing up historical examples like organ pedalboards and the Moog Taurus. From a technical perspective on the web app itself, users praised its responsiveness, attributing it to the use of the Web Audio API, preloaded audio samples, and vanilla JavaScript. Finally, a brief, unrelated but interesting etymological tangent discussed the surname "Terpstra," its Frisian origins, and pronunciation challenges.
Timescale Is Now TigerData
Timescale, the company renowned for its time-series database extension for PostgreSQL, has announced a significant rebrand, changing its company name to TigerData. This move signals their evolution beyond solely time-series data, now positioning themselves as the "modern PostgreSQL for the analytical and agentic era."
According to co-founders Ajay Kulkarni and Mike Freedman, Timescale began eight years ago focusing on time-series applications on PostgreSQL, a choice they admit was initially seen as "crazy" during the peak of NoSQL hype. They now assert that PostgreSQL has "won," with NoSQL databases like MongoDB, Cassandra, and InfluxDB being "technical dead ends." The company highlights impressive growth, reporting 2,000 customers, mid-eight-figure ARR with over 100% year-over-year growth, and $180 million raised. They state that the majority of workloads on their cloud product are no longer solely time-series, but include real-time analytical products and large-scale AI workloads. This shift led to the name change to TigerData, symbolizing speed, power, and precision, referencing their long-standing tiger mascot. While the company name is changing to TigerData and their cloud offering to Tiger Cloud, the core open-source PostgreSQL extensions, TimescaleDB (for time-series) and pgvectorscale (for AI/vector workloads), will retain their original names.
The announcement sparked a lively discussion with mixed reactions from the community. A significant theme revolved around the new name itself. Many immediately pointed out potential confusion with other tech companies using "Tiger" or similar animal themes, such as TigerBeetle, WiredTiger, Tigris Data, and TigerGraph. Some felt "Timescale" was a stronger, more unique brand, while "TigerData" sounded generic or "cheesy."
Another prominent point of discussion was the tone of the announcement, particularly the strong claims about the demise of NoSQL databases. Several users found this marketing language "bold," "arrogant," or "off-putting," arguing that technologies like Cassandra and InfluxDB are still widely used and evolving. An InfluxDB founder even weighed in, defending their current 3.x version. On the technical and product side, existing Timescale users shared positive experiences, praising its reliability for handling large volumes of time-series data. However, one user expressed frustration with achieving good performance for historical data, though a Timescale representative offered to discuss this feedback directly. Finally, some reacted negatively to the description of the company culture, finding terms like "tiger cubs" and "the jungle" "cringe." Despite the varied feedback, there was general acknowledgment of Timescale's technical contributions to PostgreSQL and well wishes for their future under the new name.