WhisperX tag archive

#Memory Safety

This page collects WhisperX intelligence signals tagged #Memory Safety. It is designed for humans, search engines, and AI agents: each item links to a canonical source-backed record with sector, source, timestamp, credibility, and exportable structured data.

Latest Signals (4)

The Lab · 2026-04-16 03:22:34 · GitHub Issues

1. CVE-2026-33816: Memory-Safety Flaw in Go's pgx Database Driver Triggers Security Update

A memory-safety vulnerability, designated CVE-2026-33816, has been identified in the widely used `github.com/jackc/pgx/v5` Go database driver. The flaw, whose severity rating has not yet been published, has prompted an immediate dependency update from version 5.7.6 to 5.9.0. The vulnerabil...
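For consumers of the driver, the remediation described above amounts to a one-line module bump. A sketch of the relevant `go.mod` change, assuming a standard Go module layout (the comment is illustrative, not part of the upstream fix):

```
require github.com/jackc/pgx/v5 v5.9.0 // bumped from v5.7.6 for CVE-2026-33816
```

Running `go get github.com/jackc/pgx/v5@v5.9.0` followed by `go mod tidy` applies the same change from the command line.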

The Lab · 2026-04-20 13:23:01 · GitHub Issues

2. Wasmtime Rust Crate Major Update to v43 Patches Critical Memory Safety Vulnerability CVE-2026-34941

A critical security vulnerability in Wasmtime, the widely used WebAssembly runtime, has prompted a major version update to patch a memory safety flaw. The vulnerability, tracked as CVE-2026-34941, stems from an incorrect bounds check during string transcoding, which could allow a malicious WebAssembly module to trigger...
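Wasmtime itself is written in Rust and the advisory's exact code path is not detailed here, but the general class of bug is worth illustrating: a transcoding routine whose destination-size check under-counts worst-case expansion. A minimal sketch in Go, using a hypothetical `transcodeLatin1` helper (not Wasmtime code) — each Latin-1 byte can expand to two UTF-8 bytes, so a naive `len(dst) >= len(src)` check would permit an out-of-bounds write in a runtime without Go's built-in bounds enforcement:

```go
package main

import (
	"errors"
	"fmt"
	"unicode/utf8"
)

// transcodeLatin1 converts Latin-1 bytes in src to UTF-8 in dst,
// returning the number of bytes written. The up-front check uses the
// worst case (every input byte >= 0x80 encodes as two UTF-8 bytes);
// checking only len(dst) >= len(src) would be the kind of incorrect
// bounds check described above.
func transcodeLatin1(dst, src []byte) (int, error) {
	if len(dst) < 2*len(src) {
		return 0, errors.New("destination too small for worst-case expansion")
	}
	n := 0
	for _, b := range src {
		// Each Latin-1 byte maps directly to the Unicode code point
		// of the same value; EncodeRune writes 1 or 2 bytes here.
		n += utf8.EncodeRune(dst[n:], rune(b))
	}
	return n, nil
}

func main() {
	src := []byte{'h', 'i', 0xE9} // "hié" in Latin-1
	dst := make([]byte, 2*len(src))
	n, err := transcodeLatin1(dst, src)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", dst[:n])
}
```

The design point is that the bound must be derived from the maximum per-unit expansion of the target encoding, not from the input length alone.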

The Lab · 2026-05-09 03:31:38 · GitHub Issues

3. inference-sdk Java Phase 1 Patches 5 High-Severity Vulnerabilities in kherud/llama.cpp Fork

The inference-sdk Java project has launched its Phase 1 foundation with a security-focused overhaul, directly addressing five high-severity GHSA advisories inherited from upstream llama.cpp. The foundation PR integrates a hardened fork of kherud/java-llama.cpp v4.2.0, bumping the bundled llama.cpp from build b4916 to b...

The Lab · 2026-05-10 02:31:42 · Mastodon:mastodon.social:#infosec

4. "Getting LLMs Drunk" to Find Linux Kernel Memory Bugs: AI Guardrails Bypassed for Vulnerability Discovery

A novel approach to vulnerability research is pushing large language models past their built-in guardrails to surface out-of-bounds write vulnerabilities in the Linux kernel. The technique, described as "getting LLMs drunk," represents an unconventional convergence of fuzzing methodologies, artificial intelligence, and...