The GPLv2's Analog Address Experiment
Curious about the inclusion of a physical mailing address in the 1991 GNU General Public License version 2 notice, the author embarked on an experiment to request the license text via postal mail. This historical detail reflects an era before widespread internet access when physical distribution of software and documentation was common. The experiment successfully yielded a license text delivered by post, though it was version 3, highlighting the evolution of how software licenses are distributed and obtained.
The GPLv2's Analog Address
The GNU General Public License version 2, published in 1991, notably included a physical mailing address for obtaining a copy of the license text. This was a practical necessity at the time, as the internet was not widely accessible, and software was often distributed on physical media like floppy disks or tapes, where including the full license text might have been impractical due to storage constraints. Providing a postal address was the most reliable method for users to get the complete terms. The later GPLv3, released in 2007, updated the notice to point to a URL instead.
The Experiment: Sending a Letter
Driven by this historical context, the author decided to test if the address was still valid by sending a physical letter to the Free Software Foundation at the listed 51 Franklin Street address in Boston. This involved navigating the complexities of international postage from the UK, including dealing with international reply coupons and acquiring US stamps. A handwritten request for the license text was prepared, along with a self-addressed envelope with the necessary return postage.
The Reply: GPLv3 Arrives
Approximately five weeks after sending the letter, a reply arrived from the Free Software Foundation. It contained the full text of a GPL license, printed on US letter-sized paper. However, the license provided was GPL version 3, not the GPL version 2 that the original notice and the author's prompt came from. The author noted that their letter did not specify the version requested but wondered if the context of writing to the GPLv2-listed address should have implied a request for that specific version. Ultimately, satisfied with the outcome of receiving a license text via post, the author chose not to pursue a follow-up request for GPLv2.
TacOS: A Scratch-Built Kernel Running DOOM
TacOS is a new x86_64 UNIX-like operating system kernel built entirely from scratch by a developer known as UnmappedStack. The project gained attention for its ability to run a port of the classic video game DOOM, a common benchmark and meme in hobby OS development.
Core OS Features and Implementation
Written primarily in C and Assembly, TacOS includes fundamental operating system components expected in a UNIX-like system. These include a Virtual File System (VFS), a scheduler, a temporary file system (TempFS), device handling, context switching, virtual memory management using paging, and a physical page frame allocator. The developer confirmed it runs successfully on both real hardware and in the Qemu emulator. The DOOM port specifically utilizes DoomGeneric, a portable version requiring minimal modifications and a custom-built libc provided by TacOS. The creator emphasizes it is a hobby project not intended for production use.
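TacOS itself is written in C and Assembly, but the idea behind one of its listed components, a physical page frame allocator, can be sketched language-agnostically. The following toy Python model (all names and sizes here are illustrative, not TacOS's actual code) tracks one bit per physical page:

```python
PAGE_SIZE = 4096  # bytes per physical page

class FrameAllocator:
    """Toy bitmap page-frame allocator: one bit per physical frame."""

    def __init__(self, memory_bytes: int):
        self.num_frames = memory_bytes // PAGE_SIZE
        self.bitmap = bytearray((self.num_frames + 7) // 8)  # 0 = free

    def alloc(self) -> int:
        """Return the physical address of a free frame, marking it used."""
        for frame in range(self.num_frames):
            byte, bit = divmod(frame, 8)
            if not self.bitmap[byte] & (1 << bit):
                self.bitmap[byte] |= 1 << bit
                return frame * PAGE_SIZE
        raise MemoryError("out of physical frames")

    def free(self, addr: int) -> None:
        """Mark the frame containing addr as free again."""
        frame = addr // PAGE_SIZE
        byte, bit = divmod(frame, 8)
        self.bitmap[byte] &= ~(1 << bit)

alloc = FrameAllocator(16 * PAGE_SIZE)
a = alloc.alloc()  # first free frame: address 0
b = alloc.alloc()  # next frame: address 4096
alloc.free(a)
c = alloc.alloc()  # reuses the freed frame: address 0 again
```

A real kernel allocator works the same way conceptually, just over the physical memory map reported by the bootloader and with care for reserved regions.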
Hacker News Discussion Highlights
The project generated significant positive engagement on Hacker News, with many commenters expressing praise and admiration for the technical achievement of building an OS kernel from scratch and porting DOOM.
Technical curiosity was high. The developer explained that running it on a laptop involves booting from a USB ISO. Recommended resources for learning OS development included osdev.wiki, hardware specifications like the Intel Developer Manual, and the project's Discord server.
Discussions on core OS functionality covered multitasking and safety. The developer detailed the use of paging for process isolation and a preemptive round-robin scheduler triggered by a PIT interrupt, explaining the context switching mechanism involving saving/restoring state and switching address spaces.
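The preemptive round-robin idea described there can be sketched in a few lines of Python (a conceptual model only; a real kernel's timer interrupt handler would also save and restore registers and swap the page-table base):

```python
from collections import deque

class Scheduler:
    """Toy round-robin scheduler: each timer tick preempts the running
    task and switches to the next runnable one, as a PIT interrupt
    handler would in a real kernel."""

    def __init__(self, tasks):
        self.queue = deque(tasks)
        self.current = None

    def timer_tick(self):
        # Requeue the preempted task ("save state"), then take the next
        # task from the front of the queue ("restore state").
        if self.current is not None:
            self.queue.append(self.current)
        self.current = self.queue.popleft()
        return self.current

sched = Scheduler(["shell", "doom", "idle"])
order = [sched.timer_tick() for _ in range(5)]
# tasks run in rotation: shell, doom, idle, shell, doom
```

Each tick gives every runnable task an equal time slice, which is exactly the fairness property round-robin is chosen for.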
The difficulty of writing drivers, particularly for modern GPUs, was acknowledged as a major challenge in OS development. The developer noted Qemu's emulated GPU is better documented than real hardware like Nvidia, and some hobby OS projects resort to using Linux drivers for GPU support.
Porting applications was highlighted as non-trivial, even with a libc. The custom libc is minimal, syscalls are not Linux-compatible, and applications often depend on system libraries that need custom implementation or porting.
The choice of DOOM as a milestone was debated. While DOOM can run bare-metal, the developer clarified that running it in userspace with a libc and POSIX-like syscalls is a more complex technical feat, requiring implementation of graphics, user input, and a functional libc, serving as a test for porting third-party software. This led to humorous tangents about the "Can it run DOOM?" meme.
A brief comparison to the GNU HURD project occurred, with some joking about TacOS's progress relative to HURD, while others defended HURD's capabilities. The developer humbly acknowledged not being at HURD's level. The discussion also included a humorous reference to category theory and OS naming.
Instant SQL: Real-Time Querying in DuckDB
MotherDuck and DuckDB have jointly launched "Instant SQL," a new feature available in both the MotherDuck cloud service and the local DuckDB UI. The core idea is to transform the SQL writing workflow by providing real-time feedback, allowing users to see query results update instantly as they type SELECT statements, without needing to run the query manually. This aims to make data exploration and query refinement significantly faster and more fluid.
Key Features and Capabilities
Instant SQL offers several powerful features. It provides real-time result previews, turning query writing into a live data exploration process. Debugging complex queries is simplified with instant CTE inspection, allowing users to click on a CTE and see its intermediate results, with changes propagating instantly. Users can also break down complex expressions within the result table to pinpoint issues. The feature works not only for local DuckDB tables but also for external data sources like Parquet, Postgres, SQLite, MySQL, Iceberg, and Delta Lake. This real-time feedback enables faster query refinement and enhances the use of AI assistance by immediately showing the impact of suggested code.
The Technology Behind Instant SQL
Implementing Instant SQL requires a unique combination of technical capabilities. Crucially, it relies on a fast, local-first engine like DuckDB for low query latency. It also needs sophisticated query rewriting capabilities, such as DuckDB's json_serialize_sql function, to parse SQL into an Abstract Syntax Tree (AST) and sample data for previews. Intelligent local caching strategies are essential for instant results. Finally, the tool must map the user's cursor position in the editor to the corresponding node in the AST to understand which part of the query needs previewing.
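The cursor-to-AST mapping can be illustrated with a toy AST in which each node records its source span; Instant SQL works on DuckDB's real serialized AST, but the lookup idea is the same (all names and spans below are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str
    start: int  # character offsets into the SQL text
    end: int
    children: list = field(default_factory=list)

def node_at(node: Node, cursor: int) -> Node:
    """Return the deepest AST node whose source span contains the cursor."""
    for child in node.children:
        if child.start <= cursor < child.end:
            return node_at(child, cursor)
    return node

sql = "WITH t AS (SELECT 1 AS x) SELECT x FROM t"
ast = Node("query", 0, len(sql), [
    Node("cte:t", 5, 25, [Node("select", 11, 24)]),
    Node("select", 26, len(sql)),
])

cursor_in_cte = node_at(ast, 7)    # on the CTE name "t"
cursor_in_body = node_at(ast, 15)  # inside the CTE's SELECT body
cursor_outer = node_at(ast, 30)    # in the outer SELECT
```

Once the editor knows which node the cursor is in, it can preview just that subquery, which is what makes the click-to-inspect CTE feature possible.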
Community Reception and Discussion
The reception on Hacker News was largely enthusiastic, with many users expressing excitement and calling the feature "amazing" and a "killer feature," appreciating the technical ingenuity.
The ability to inspect and debug CTEs in real-time was a particularly popular feature, seen as a significant time-saver. Users were also pleased to confirm the feature's availability in the local DuckDB UI, not just the paid MotherDuck service.
Feature requests included tuning the update frequency to reduce distraction while typing, open-sourcing the UI, and adding support for pipe-based syntax (like PRQL) for improved readability and autocomplete. Users also requested an easier way to preview source tables.
Safety concerns regarding instant execution of write operations were addressed by the developers, who clarified that Instant SQL strictly parses the AST to ensure only SELECT queries are previewed, preventing DELETE or DROP execution.
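As a deliberately simplified stand-in for that guard, one could check a statement's leading keyword; the point of the comparison is that Instant SQL does this properly on the parsed AST, which catches cases a textual check would miss:

```python
import re

SAFE_STARTS = ("select", "with")  # WITH ... SELECT is also read-only

def is_previewable(sql: str) -> bool:
    """Naive read-only guard: strip comments and check the first keyword.
    A real implementation (like Instant SQL's) inspects the parsed AST
    instead, which also rejects tricks this textual check would miss."""
    stripped = re.sub(r"--[^\n]*|/\*.*?\*/", " ", sql, flags=re.S).strip()
    first = stripped.split(None, 1)[0].lower() if stripped else ""
    return first in SAFE_STARTS

assert is_previewable("SELECT * FROM t")
assert not is_previewable("DELETE FROM t")
assert not is_previewable("DROP TABLE t")
```

Working on the AST rather than the text is what lets the feature be both safe and precise about which statements to preview.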
Discussions also touched on the philosophical debate around SQL syntax ordering ("FROM first") and the technical internals like AST parsing. Some users reported minor bugs with CTE selection, which the authors acknowledged.
CubeCL: Writing GPU Kernels in Rust
CubeCL is a project aiming to bring high-performance GPU kernel development into the Rust ecosystem. It allows developers to write GPU compute kernels directly in Rust code, leveraging Rust's language features for maintainability and flexibility, and uses a procedural macro system to transform annotated Rust functions into GPU kernels.
Multi-Platform Support and Core Features
CubeCL targets multiple GPU runtimes, including WGPU (for cross-platform Vulkan, Metal, DirectX, WebGPU), CUDA for NVIDIA, and ROCm/HIP for AMD, with a JIT CPU runtime using Cranelift also planned. Kernels are written as standard Rust functions annotated with a #[cube] attribute, supporting many Rust primitives. Key features for optimization and portability include Automatic Vectorization for SIMD instructions, Comptime for injecting runtime constants and logic into kernel compilation (enabling specialization and loop unrolling), and Autotuning to benchmark and cache optimal kernel configurations for specific hardware. CubeCL uses a "Cube" topology abstraction to unify concepts like grid dimensions and thread indices across different backends.
Community Discussion and Technical Deep Dive
The project generated significant interest on Hacker News. Commenters requested more complex examples, particularly for AI/ML tasks like matrix multiplication (GEMM). The authors confirmed support for advanced features like warp-level operations ("Plane Operations"), atomics, tensor core instructions, and CUDA's TMA instructions, noting the README is outdated.
Comparisons to other GPGPU tools were frequent, including Halide, lower-level Rust CUDA wrappers, and multi-platform solutions like OpenCL and SYCL. A debate arose regarding OpenCL support and the value of CPU targets for debugging and single-source parallelization. The authors clarified that Metal support is now direct MSL compilation (since v0.5), using WGPU only for the runtime, not shader compilation via Naga.
Future plans mentioned include support for newer data types like FP8 and FP4. The power of Comptime for specialization and Autotuning for performance optimization across hardware was reiterated.
Regarding Rust's safety guarantees, an author explained that while kernel launch APIs can prevent memory corruption outside the kernel, the nature of parallel GPU programming means resources within a kernel are inherently shared and mutable between threads.
Mark Zuckerberg: Is Traditional Social Media "Over"?
According to a New Yorker article discussing Mark Zuckerberg's testimony during Meta's antitrust trial, Zuckerberg suggested that social media, as it was originally conceived, is essentially "over." This perspective centers on the fundamental shift in how platforms like Facebook are used compared to their early days.
The Evolution of Social Platforms
The article argues that social media platforms have dramatically changed from their initial purpose of connecting friends and sharing personal updates. Over the past decade, they have evolved to become more like traditional broadcast media outlets. Content designed for mass consumption, such as promotional videos, political commentary, aggregated clips, and AI-generated material, now dominates feeds.
This shift means the "social" aspect, focused on personal connections and updates from friends, has diminished. Personal posts often get lost in the flood of professionally produced or algorithmically amplified content aimed at broad reach. Zuckerberg's argument, made in the context of an antitrust trial, appears strategic, potentially aiming to redefine Meta's market position by suggesting its platforms are no longer solely traditional "social networks."
Cory Doctorow on Facebook's "Carelessness"
Cory Doctorow's review of Sarah Wynn-Williams's memoir, "Careless People," focuses on the central theme of indifference among Facebook's top leadership – Mark Zuckerberg, Sheryl Sandberg, and Joel Kaplan. Doctorow argues that this "carelessness" regarding consequences is a direct outcome of the company achieving excessive market dominance.
Leadership Dysfunction and Global Shift
Wynn-Williams's account portrays Facebook's leadership critically, detailing instances of arrogance, entitlement, incompetence, and misconduct. Anecdotes illustrate their detachment and poor decision-making. Initially focused provincially on the US market, Facebook shifted its attention globally out of necessity to maintain growth stock valuation after saturating the US market, using its highly valued shares to acquire rivals and talent.
Dominance Leads to Indifference
Doctorow posits that Facebook's indifference is not merely a personality trait but a structural consequence of becoming "Too Big to Care." By successfully acquiring competitors (like Instagram and WhatsApp), influencing regulators, and managing its workforce, Facebook became insulated from market competition, regulatory action, and employee dissent that would typically hold a company accountable.
Policy Failures and the Path Forward
The book details documented abuses enabled by this unchecked power, including building censorship tools used against dissidents, ignoring warnings about the platform's role in the Myanmar genocide, instances of sexual harassment, and lying to the public and advertisers. Doctorow emphasizes that this state of "carelessness" was facilitated by policy failures, specifically weak antitrust enforcement, inadequate privacy protections, and the expansion of IP law enabling lock-in. The conclusion is that while laws cannot force executives to care, they can create consequences that compel them to consider the repercussions of their actions, making regulatory and antitrust efforts necessary tools for reintroducing accountability.
Solving the World's Largest Road-Map TSP: 81,998 South Korean Bars
Researchers from the University of Waterloo and Roskilde University have achieved a significant milestone in computational optimization by solving the largest road-map instance of the Traveling Salesman Problem (TSP) to provable optimality. Their challenge: finding the shortest possible walking tour to visit 81,998 bars across South Korea.
Methodology and the Optimal Route
The researchers used the Open Source Routing Machine (OSRM) to calculate the walking time between every pair of the 81,998 bar locations, generating over 3.3 billion travel times. They then applied sophisticated optimization techniques, combining the LKH heuristic solver to find a good initial tour with the Concorde code's cutting-plane method and branch-and-bound to prove its optimality. The key achievement is not just finding a short route, but mathematically proving it is the absolute shortest possible walking tour. The resulting tour, if walked continuously, would take an estimated 178 days, 1 hour, 56 minutes, and 17 seconds. This project serves as a valuable testbed for developing general-purpose optimization methods applicable to various real-world problems.
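LKH and Concorde are far beyond a summary, but the division of labor they represent (a heuristic finds a good tour, exact methods then certify optimality) can be illustrated with the simplest heuristic pair, nearest neighbor plus 2-opt, on toy points (illustrative only, not the researchers' code):

```python
import itertools
import math

def tour_length(points, tour):
    """Total length of a closed tour visiting points in the given order."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbor(points):
    """Greedy starting tour: always walk to the closest unvisited point."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(points, tour):
    """Local search: reverse any segment that shortens the tour, repeat."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            cand = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(points, cand) < tour_length(points, tour) - 1e-12:
                tour, improved = cand, True
    return tour

points = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0)]
tour = two_opt(points, nearest_neighbor(points))
```

Heuristics like this (and LKH, a vastly more sophisticated relative) find short tours quickly; what made the Korea result notable is the separate, far more expensive step of proving no shorter tour exists, which is Concorde's job.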
Hacker News Community Reactions
The project sparked lively discussion on Hacker News, covering technical aspects, cultural observations, and humor. Many commenters were impressed by the scale of the problem and the computational resources required (44 CPU-years).
The practicality of a 178-day pub crawl generated amusement, with jokes about designated drivers and the dynamic nature of bar openings/closings.
A significant thread debated the definition of "bar" in the dataset, with commenters familiar with South Korea suggesting the list likely includes various establishments licensed to serve alcohol, explaining the high number compared to other regions. This led to discussions comparing urban density and walking culture.
Technically, commenters clarified that the LKH heuristic likely found the tour quickly, while the bulk of the computation was dedicated to the provable optimality using Concorde's exact methods. The concept of "proof" in this context was discussed, as was the connection to P vs NP, clarifying that solving specific large instances doesn't change complexity classes.
The source of the data, the Korean National Police Agency database, added another layer of intrigue. Overall, the discussion highlighted appreciation for the blend of serious mathematical achievement with a quirky, relatable application.
OpenAI Releases gpt-image-1 API
OpenAI has officially released its latest image generation model, known as gpt-image-1 – the same powerful model integrated into ChatGPT – via its API. This release provides developers with direct access to the model's capabilities for integration into their own applications and workflows.
Key Capabilities and Features
gpt-image-1 is described as having a different architecture from previous models like DALL-E 3, which users report results in significantly better prompt adherence and the ability to accurately render text within images. A major feature is the support for image references, allowing users to provide an input image and prompt the model to generate a new image based on it, enabling tasks like restyling or editing existing photos. The API offers different quality tiers – low, medium, and high – each with varying costs and generation times. Access requires organization verification, a step OpenAI likely implemented to mitigate potential misuse.
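As a minimal sketch of what a call looks like, the request body for the image generation endpoint might be assembled as below. The field names follow the general shape of OpenAI's images API, and the "low"/"medium"/"high" tiers are as described above; treat the exact endpoint and parameters as assumptions to verify against the official API reference:

```python
import json

def image_request(prompt: str, quality: str = "low") -> str:
    """Build a JSON request body for an image generation call.
    Quality tiers ("low"/"medium"/"high") trade cost for fidelity."""
    if quality not in ("low", "medium", "high"):
        raise ValueError("unknown quality tier")
    return json.dumps({
        "model": "gpt-image-1",
        "prompt": prompt,
        "n": 1,
        "quality": quality,
    })

body = image_request("a duck reading a newspaper", quality="medium")
# POST this body to the images generation endpoint with your API key header.
```

Choosing the tier per request is what lets cost-sensitive applications stay on the cheaper settings.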
Hacker News Discussion and Concerns
The API release generated considerable discussion on Hacker News. A prominent theme involved concerns about content moderation and potential military applications, with some users claiming or hearing about special API tiers with reduced moderation for defense contractors. This sparked debate about AI alignment and potential military use cases, from mundane graphics to generating synthetic data for training computer vision systems.
Another major point was the model's technical quality compared to alternatives. Many users praised gpt-image-1's superior prompt adherence and text generation, seeing it as a significant improvement over diffusion models for certain tasks and potentially replacing complex editing workflows. However, others noted limitations and argued that diffusion models still excel in specific areas or offer more control.
Pricing was also a hot topic, with the high-quality tier seen as expensive, potentially limiting its use in cost-sensitive applications. Speculation arose about whether this pricing reflects the actual compute cost. Beyond military uses, commenters discussed various potential applications, including personalized educational content, marketing materials, game assets, and specialized tools.
Ask HN: Prompts That Stump AI Models
An "Ask HN" thread invited the Hacker News community to share their best examples of AI prompts designed to break, confuse, or reveal the limitations of current large language models (LLMs). The core idea was to collaboratively identify the boundaries of present-day AI capabilities.
The Goal: Understanding AI Limitations
This exercise goes beyond simply tricking an AI; it's a practical way for developers and enthusiasts to understand where current models struggle. Participants aimed to uncover the types of reasoning LLMs fail at, the gaps in their knowledge or logic, and the complex instructions they cannot follow accurately.
Types of Prompts That Challenge AI
Several recurring themes and types of prompts emerged from the discussion. One common approach involves logical paradoxes or self-referential statements that tie models in knots trying to reconcile conflicting constraints. Another category focuses on complex, multi-step reasoning or instructions with subtle nuances, requiring models to track multiple variables or apply conditional logic across several sentences, often leading to nonsensical outputs.
Prompts designed to test ethical boundaries or safety filters were also shared, sometimes finding ways to bypass or reveal inconsistencies in these guardrails. Users also shared prompts exploiting known limitations in factual recall or real-time information, quickly demonstrating where the training data ends. Finally, prompts imposing creative constraints or impossible tasks (like writing without a specific letter) revealed how well models adhere to arbitrary rules or attempt to "hallucinate" solutions.
Analyzing AI Failure Modes
The discussion extended beyond merely sharing prompts to analyzing why these specific examples work. Commenters delved into potential explanations based on underlying model architectures, training data biases, and decoding strategies that might contribute to particular failure modes. This collaborative effort serves as a way to map the edges of current AI intelligence, highlighting that despite their impressive abilities, LLMs still possess significant blind spots and vulnerabilities.
The Universe's "Magic Length": The 21 cm Hydrogen Line
An article from Big Think explores the profound significance of a specific quantum transition in neutral hydrogen atoms, which produces radio waves with a wavelength of approximately 21 centimeters. This phenomenon is described as the universe's "magic length" due to its crucial role in astronomy and cosmology.
The Physics of the 21 cm Transition
The 21 cm radiation originates from a "hyperfine" transition in a neutral hydrogen atom, specifically a spin-flip between the slightly higher energy state where the proton and electron spins are aligned and the lower energy state where they are anti-aligned. Although this transition is "forbidden" by simple quantum rules (meaning it has a very low probability under the electric dipole approximation), it can occur over extremely long timescales (averaging 10 million years) via a weaker magnetic dipole interaction. This spin-flip releases a photon with an incredibly precise wavelength of 21.106114053 centimeters, corresponding to a frequency of about 1420.4 MHz.
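The wavelength and frequency are tied together by λ = c/ν, so a quick check recovers the 21.106 cm figure from the hyperfine frequency (constants rounded to their commonly quoted values):

```python
c = 299_792_458.0       # speed of light, m/s (exact by definition)
nu = 1_420_405_751.768  # hydrogen hyperfine frequency, Hz

wavelength_cm = c / nu * 100
# ≈ 21.106 cm, matching the quoted 21.106114053 cm
```

This is also why the commenters' precision complaint matters: "precisely 21 cm" is off by about a millimeter, which is enormous by the standards of this transition.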
Why 21 cm is Crucial for Astronomy
The 21 cm line is an invaluable tool for radio astronomy due to its precise wavelength, the abundance of neutral hydrogen throughout the universe, and the signal's persistence over millions of years. It allows astronomers to map the distribution of neutral hydrogen gas across vast cosmic distances, probe the early universe before the first stars formed by observing redshifted signals, and identify regions of recent star formation where ionized hydrogen re-forms. Observing this signal, particularly from the radio-quiet far side of the Moon, is a key goal for future radio telescopes.
Hacker News Discussion Points
The Hacker News comments section delved into several aspects of the article. Commenters clarified the physics of the "forbidden" transition, explaining it's allowed via the magnetic dipole interaction rather than being strictly forbidden or involving quantum tunneling in the way the article might imply.
A significant thread discussed the use of the 21 cm line on the Pioneer plaques as a universal unit of length for potential alien communication, debating its effectiveness and the universality of the underlying physics concepts required for decoding.
Commenters also noted the article's repeated use of "precisely 21 cm," pointing out the actual value is closer to 21.106 cm and highlighting the importance of precision in scientific language. The counter-intuitive nature of a tiny atom emitting such a long wavelength photon was also discussed. Connections were made to SETI's focus on the 21 cm line and the use of this transition in highly precise Hydrogen Masers for timekeeping.