On the last day of February, fourteen commits landed in rapid succession as Vitia Invenire AI went from initial commit to complete MVP in a single marathon session. The result: a comprehensive Windows 11 supply chain security assessment tool that performs 80 security checks across 19 categories, designed to validate laptops assembled in third-party facilities before they reach an organization’s network.
The tool scans firmware integrity, executable and library signatures, OEM pre-installed software, certificate stores, driver integrity, network configuration, and persistence mechanisms. Findings are severity-scored and rendered in structured JSON and HTML reports, giving security teams a clear picture of whether a machine has been tampered with during manufacturing or transit.
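To make the check-to-finding flow concrete, here is a minimal sketch of how one signature check might be scored and serialized, assuming a Windows host where PowerShell's `Get-AuthenticodeSignature` cmdlet is available. The function names, severity labels, and finding fields are illustrative, not the project's actual API; the status-code mapping assumes `SignatureStatus.Valid` serializes as `0`.

```python
import json
import subprocess

def finding_from_status(path: str, sig: dict) -> dict:
    """Turn a parsed Get-AuthenticodeSignature result into a
    severity-scored finding dict (Status 0 assumed to mean Valid)."""
    valid = sig.get("Status") == 0
    return {
        "check": "authenticode_signature",
        "target": path,
        "passed": valid,
        "severity": "info" if valid else "high",
        "detail": sig.get("StatusMessage", ""),
    }

def check_authenticode(path: str) -> dict:
    """Query a file's Authenticode signature via PowerShell (Windows only)."""
    out = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         f"Get-AuthenticodeSignature -LiteralPath '{path}' | "
         "Select-Object Status, StatusMessage | ConvertTo-Json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return finding_from_status(path, json.loads(out))

# A list of such findings serializes directly into the JSON report:
# json.dumps([check_authenticode("C:/Windows/System32/notepad.exe")], indent=2)
```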
The session’s commit history tells the story of real-world engineering: three commits fixing installer issues, two addressing Windows-specific quirks, and a burst of golden-image fingerprinting additions that enable baseline comparison across fleet deployments. By the end of the day, the tool could answer the question every supply chain security team asks: “Has this machine been touched?”
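Golden-image fingerprinting reduces to a hash-and-diff problem: record a digest per file on a known-good machine, then flag anything added, missing, or modified on a fielded one. A minimal sketch, with hypothetical function names and in-memory file contents standing in for a real filesystem walk:

```python
import hashlib

def fingerprint(files: dict[str, bytes]) -> dict[str, str]:
    """Map each path to a SHA-256 digest of its contents."""
    return {p: hashlib.sha256(data).hexdigest() for p, data in files.items()}

def diff_against_golden(golden: dict[str, str], current: dict[str, str]) -> dict:
    """Compare a machine's fingerprint to the golden-image baseline."""
    return {
        "added":    sorted(set(current) - set(golden)),
        "missing":  sorted(set(golden) - set(current)),
        "modified": sorted(p for p in golden.keys() & current.keys()
                           if golden[p] != current[p]),
    }
```

Any non-empty bucket in the diff is a candidate tampering indicator, which is exactly the shape of answer fleet deployments need.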
Built in Python 3.11+ with Pydantic for data models, Click for CLI, Jinja2 for HTML report templating, and PowerShell deployment scripts, Vitia Invenire bridges the gap between security policy and practical verification.
“Searches for gravitational lenses, galaxy morphology anomalies, stellar distribution anomalies, and emergent patterns.”
— StarPattern_AI README
What happens when you point an evolutionary optimization algorithm at the night sky? StarPattern AI, which debuted in late February with six commits, is finding out. The project combines multi-survey astronomical data acquisition with GPU-accelerated pattern detection and compositional pipeline evolution to search for structures that human astronomers might miss.
The system queries four major surveys—SDSS (Sloan Digital Sky Survey), Gaia, MAST, and ZTF (Zwicky Transient Facility)—and feeds the data through a detection pipeline that hunts for gravitational lenses, galaxy morphology anomalies, stellar distribution patterns, kinematic structures, and time-domain variability signals.
What makes the approach novel is the evolutionary component: rather than hand-tuning detection parameters, the system evolves them through optimization, with LLM integration from four providers (Claude, OpenAI, Gemini, and Grok) offering strategic guidance on parameter selection. A learned meta-detection layer sits atop the classical detectors, combining their outputs into a unified anomaly score.
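The two ideas compose naturally: an evolutionary loop tunes detector parameters against a fitness function, and a meta-layer folds per-detector scores into one number. The sketch below is a toy version under those assumptions; the mutate-and-select scheme, the single scalar threshold, and the weighted average are all illustrative, not StarPattern AI's actual algorithms.

```python
import random

def evolve_threshold(fitness, generations=30, pop_size=12, seed=0):
    """Evolve a scalar detection threshold by mutation and selection.
    `fitness` scores a candidate threshold (higher is better)."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                   # keep the fittest half
        children = [min(1.0, max(0.0, p + rng.gauss(0, 0.05)))
                    for p in parents]                    # mutate the survivors
        pop = parents + children
    return max(pop, key=fitness)

def unified_anomaly_score(detector_scores: dict[str, float],
                          weights: dict[str, float]) -> float:
    """Meta-detection layer: weighted combination of per-detector scores."""
    total = sum(weights.values())
    return sum(weights[d] * s for d, s in detector_scores.items()) / total
```

In the real system an LLM could propose the mutation step size or the weight vector rather than leaving them fixed, which is where the "strategic guidance" fits in.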
Three commits in four days refined the detection pipeline and reporting, removed arbitrary limits on object and anomaly detection counts, and improved overall performance. The temporal analysis module was also enhanced, enabling the system to track objects that change brightness over time—a key signature of variable stars, transiting exoplanets, and other astrophysical phenomena.
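A standard first-pass variability test compares a light curve's scatter to its photometric errors: a reduced chi-square well above 1 means the brightness changes are too large to be noise. A minimal sketch (the function names and the threshold of 3 are illustrative choices, not the project's):

```python
from statistics import mean

def variability_index(mags: list[float], errs: list[float]) -> float:
    """Reduced chi-square of magnitudes about their mean; values well
    above 1 suggest genuine variability rather than photometric noise."""
    m = mean(mags)
    n = len(mags)
    return sum(((x - m) / e) ** 2 for x, e in zip(mags, errs)) / (n - 1)

def is_variable(mags: list[float], errs: list[float],
                threshold: float = 3.0) -> bool:
    return variability_index(mags, errs) > threshold
```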
A new multi-agent OSINT research tool arrived in late February that combines automated technical reconnaissance with cooperative and adversarial AI analysis. Point it at a domain, and it queries 39-plus public sources—DNS records, WHOIS data, SSL certificates, LinkedIn profiles, GitHub repositories, SEC filings, patent databases, job postings, and more.
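One of those probes, TLS certificate collection, needs nothing beyond the standard library. The sketch below fetches a domain's certificate and flattens the nested RDN tuples that `ssl.SSLSocket.getpeercert()` returns into a readable summary; the wrapper names and output fields are mine, not the tool's.

```python
import socket
import ssl

def fetch_cert(domain: str, port: int = 443) -> dict:
    """Retrieve the peer certificate for a domain (network required)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((domain, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=domain) as tls:
            return tls.getpeercert()

def summarize_cert(cert: dict) -> dict:
    """Flatten getpeercert()'s nested RDN tuples into a flat summary."""
    flat = lambda rdns: {k: v for rdn in rdns for k, v in rdn}
    return {
        "subject_cn": flat(cert.get("subject", ())).get("commonName"),
        "issuer_org": flat(cert.get("issuer", ())).get("organizationName"),
        "not_after": cert.get("notAfter"),
        "san": [v for t, v in cert.get("subjectAltName", ()) if t == "DNS"],
    }
```

Subject alternative names alone are a useful reconnaissance signal: they frequently reveal internal hostnames and sibling services the organization never advertises.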
The raw intelligence then feeds through multi-agent AI workflows where different LLM providers analyze, critique, and refine each other’s assessments. The adversarial layer is key: rather than trusting a single AI’s interpretation, the system pits providers against each other to challenge assumptions and surface blind spots.
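The draft/critique/refine cycle can be sketched with providers modeled as plain callables; stubs stand in for the real LLM calls, and none of this implies the project's actual orchestration API.

```python
from typing import Callable

Provider = Callable[[str], str]

def adversarial_review(task: str, drafter: Provider, critic: Provider,
                       refiner: Provider, rounds: int = 2) -> dict:
    """One provider drafts, a second attacks the draft's assumptions,
    and a third revises. Repeat for `rounds` iterations."""
    draft = drafter(task)
    critiques = []
    for _ in range(rounds):
        critique = critic(f"Find weaknesses in this assessment:\n{draft}")
        critiques.append(critique)
        draft = refiner(f"Revise the assessment.\nDraft: {draft}\n"
                        f"Critique: {critique}")
    return {"final": draft, "critiques": critiques}
```

Using different providers for the critic and drafter roles is the point: a model is less likely to endorse its own blind spots when a differently trained model is writing the critique.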
Output covers infrastructure analysis, personnel mapping, product discovery, financial indicators, security posture assessment, and corporate structure mapping—a comprehensive intelligence picture assembled from publicly available data.
The FixPlayList project (m3u_resolver) shipped its initial code in February, tackling the perennial frustration of M3U/M3U8 playlists that reference files by one naming convention while the music library uses another. The five-strategy cascade—exact path, filename, stem match, ID3 metadata tags, and fuzzy matching with false-positive protections—handles 15+ audio formats and includes stopword filtering and artist mismatch caps to prevent bad matches.
The multi-AI comparison tool received a February enhancement improving its local GGUF model support through llama-cpp-python. The update ensures that locally hosted models participate on equal footing with cloud providers in the adversarial evaluation pipeline—important for organizations that need to evaluate on-premises alternatives against commercial APIs.
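"Equal footing" implies a common provider interface that local and cloud backends both satisfy. A sketch of that pattern, assuming llama-cpp-python's documented `Llama(model_path=...)` constructor; the wrapper and function names are mine, not the tool's:

```python
from typing import Protocol

class Provider(Protocol):
    name: str
    def complete(self, prompt: str) -> str: ...

class LocalGGUF:
    """Wraps a local GGUF model so it satisfies the same interface
    as cloud providers in the comparison pipeline."""
    def __init__(self, model_path: str, name: str = "local-gguf"):
        from llama_cpp import Llama   # lazy import: optional dependency
        self.name = name
        self._llm = Llama(model_path=model_path)

    def complete(self, prompt: str) -> str:
        out = self._llm.create_completion(prompt, max_tokens=256)
        return out["choices"][0]["text"]

def run_comparison(prompt: str, providers: list[Provider]) -> dict[str, str]:
    """Collect one answer per provider, local and cloud alike."""
    return {p.name: p.complete(prompt) for p in providers}
```

Because `run_comparison` only sees the `Provider` shape, the downstream adversarial evaluation never needs to know whether an answer came from an API call or a GGUF file on disk.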
A new command-line tool for verifying Global Privacy Control compliance arrived at the February/March boundary. CheckGPC fetches .well-known/gpc.json endpoints, compares cookie behavior with and without the Sec-GPC header, and detects response differences—a practical tool for CCPA/CPRA compliance auditing.
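Both checks are straightforward to sketch with the standard library: the GPC spec says the well-known resource is a JSON object whose `gpc` member is a boolean, and the cookie comparison is just two fetches that differ only in the `Sec-GPC: 1` header. Function names and the return shape below are illustrative, not CheckGPC's CLI.

```python
import json
import urllib.request

def validate_gpc_resource(payload: bytes) -> dict:
    """Check a .well-known/gpc.json body: it must be a JSON object
    whose 'gpc' member is a boolean."""
    try:
        doc = json.loads(payload)
    except ValueError:
        return {"valid": False, "reason": "not JSON"}
    if not isinstance(doc, dict) or not isinstance(doc.get("gpc"), bool):
        return {"valid": False, "reason": "'gpc' must be a boolean"}
    return {"valid": True, "gpc": doc["gpc"]}

def cookies_differ(domain: str) -> bool:
    """Fetch the homepage with and without Sec-GPC and compare
    Set-Cookie headers (network required)."""
    def set_cookies(headers: dict) -> list:
        req = urllib.request.Request(f"https://{domain}/", headers=headers)
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.headers.get_all("Set-Cookie") or []
    return set_cookies({}) != set_cookies({"Sec-GPC": "1"})
```

A differing Set-Cookie response is only a signal, not proof of compliance either way, which is why a human-readable diff matters in an auditing tool.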
··· “Frustrating adversaries since the dial-up era” · GitHub: rondilley · 42 Repositories and Counting ···