DVIUS INTELLIGENCE

Real-Time Cyber Attack Monitoring

THREAT INTELLIGENCE FEED

[ LIVE THREAT DASHBOARD ]

21,650
ACTIVE THREATS
3,390
CRITICAL
4,145
RANSOMWARE
12
SOURCES
DVIUS AI: Advanced Threat Intelligence and Machine Learning Defense
DVIUS AI represents a groundbreaking advancement in cybersecurity threat intelligence. Our proprietary machine learning algorithms analyze global threat data in real-time, identifying patterns and anomalies that traditional security systems often miss. The system processes billions of data points daily, leveraging deep neural networks to provide unprecedented visibility into evolving cyber threats. Recent deployments have demonstrated remarkable effectiveness with 99.7% accuracy in threat detection and a 68% reduction in false positives compared to conventional solutions. The autonomous response capabilities can contain threats within milliseconds, significantly reducing potential damage to enterprise systems. As cyber threats continue to evolve in sophistication, DVIUS AI's adaptive learning capabilities ensure continuous improvement in defensive strategies. The platform represents the future of intelligent, automated cybersecurity defense.
Google gets agent-ready for the Mythos age
In response to Anthropic Mythos, instead of launching another LLM, Google unveiled a broad push toward agentic, AI-driven defense at Google Cloud Next ‘26 to help SOC analysts as they scramble to keep up with the influx of CVEs Mythos threatens. As Mythos promises more vulnerabilities, and reports of unauthorized access despite its limited preview emerge, Google is betting that only agents, not analysts, can keep pace with what is coming. Google unveiled new capabilities focused on automating detection, accelerating response, and securing the increasingly messy intersection of AI, cloud, and third-party ecosystems. Under this, the search giant announced three new agents in Google Security Operations, expanded security across clouds and AI studios with expanded Wiz integration, and the Gemini Enterprise Agent Platform that promises a defense layer against shadow AI. Additionally, Google said it is working on simplifying permissions with modern IAM, along with a handful of improvements in Google Cloud Security. New emphasis on agentic defense Google’s most direct help to SOC teams comes in the form of three new AI agents embedded in Google Security Operations. These include a threat hunting agent, a detection engineering agent, and a third-party context agent. While the threat hunting and detection engineering agents, both now in preview, aim to identify novel attack patterns and close detection gaps, respectively, the third-party context agent, set to enter preview, is designed to enrich investigations with external intelligence. Google claimed its existing triage and investigation agent has already processed over five million alerts, shrinking analysis time from 30 minutes to roughly a minute using Gemini. There’s also a push toward what Google calls “agentic automation,” where response actions can be triggered automatically, paired with new dark web intelligence (infused into Google Threat Intelligence) capabilities to prioritize real threats with high accuracy. Wiz, AI-BOMs, and securing the AI development sprawl Google has expanded its Wiz portfolio to tackle the chaos of AI development and multi-cloud risk. Wiz is being positioned as the connective tissue across environments, supporting everything from AWS and Azure to SaaS platforms and AI agent studios.“Wiz now supports Databricks as well as new agent studios like AWS Agentcore, Gemini Enterprise Agent Platform, Microsoft Azure Copilot Studio, and Salesforce Agentforce, so customers gain visibility however their teams choose to build,” said Francis deSouza, COO, Google Cloud and President, Security Products. Other new capabilities from the integration come in the form of inline scanning of AI-generated code, integrations directly into developer workflows, and an AI-bill of materials (AI-BOM) that inventories all AI components, including models, frameworks, and IDE plugins across an organization. AI-BOM is targeted as a practical response to shadow AI, offering visibility into tools developers use versus what’s approved. Securing the agentic web Google is also aiming to have visibility into the plane where AI agents interact autonomously across systems, something it calls the “agentic web.” To address that, it introduced Agent Identity and Agent Gateway for governance and policy enforcement, alongside deeper integrations for Model Armor to mitigate risks like prompt injection and data leakage. There’s also a reworked approach to bot and fraud detection through Google Cloud Fraud Defense, which aims to distinguish between humans, bots, and AI agents across the workflows.
[Webinar] Mythos Reality Check: Beating Automated Exploitation at AI Speed
Imagine a world where hackers don't sleep, don't take breaks, and find weak spots in your systems instantly. Well, that world is already here. Thanks to AI, attackers are now launching automated, large-scale exploits faster than ever before. The time you have to fix a vulnerability before it gets attacked is shrinking to zero. We call this the Collapsing Exploit Window, and it means your
Can AI Attack the Cloud? Lessons From Building an Autonomous Cloud Offensive Multi-Agent System
Unit 42 reveals how multi-agent AI systems can autonomously attack cloud environments. Learn critical insights and vital lessons for proactive security. The post Can AI Attack the Cloud? Lessons From Building an Autonomous Cloud Offensive Multi-Agent System appeared first on Unit 42.
AIはクラウドを攻撃できるのか?自律型クラウド攻撃型マルチエージェント システムの構築から得られた教訓
Unit 42は、マルチエージェントAIシステムがクラウド環境をどのように自律的に攻撃できるかを明らかにします。プロアクティブなセキュリティのための重要なインサイトと不可欠な教訓を学びます。 The post AIはクラウドを攻撃できるのか?自律型クラウド攻撃型マルチエージェント システムの構築から得られた教訓 appeared first on Unit 42.
Microsoft taps Anthropic’s Mythos to strengthen secure software development
Microsoft plans to integrate Anthropic’s Mythos AI model into its Security Development Lifecycle, a move that suggests advanced generative AI is beginning to play a direct role in how major software vendors identify vulnerabilities and harden code against attack. The company said it will use Mythos Preview, along with other advanced models, as part of a broader push to strengthen secure coding and vulnerability detection earlier in the software development process. The announcement comes as Anthropic’s Mythos heightens concerns that advanced AI models could dramatically shrink the time between finding a software flaw and exploiting it. Analysts say Mythos marks a notable leap in AI-driven vulnerability research, with the ability to uncover thousands of serious flaws across major operating systems and browsers. OpenAI has also entered the space with GPT-5.4-Cyber, a version of its flagship model tailored for defensive cybersecurity work. Keith Prabhu, founder and CEO of Confidis, said a future OpenAI model, which he referred to as “Spud,” could emerge as an even stronger rival. The move matters beyond Microsoft’s own engineering organization. For enterprise security leaders, it offers a clear sign that frontier AI models are starting to move from experimental use into core cybersecurity workflows. That could change how software vendors build products and how defenders view the risks and benefits of using the same AI tools attackers may also exploit. “This marks a seminal turning point in the secure software development lifecycle process,” Prabhu said. “While earlier tools were only capable of static code scanning for vulnerabilities, with AI, there is a possibility of a dynamically learning model which can also perform dynamic vulnerability and even penetration testing in real time.” Over time, Prabhu said, the pressure to adopt AI-assisted security tools is likely to spread beyond the largest software vendors. Why Microsoft’s move matters Neil Shah, vice president for research at Counterpoint Research, said more than 95% of Fortune 500 companies use Microsoft Azure in some capacity, while Azure AI and the Copilot suite are entrenched across about 65% of those companies. Millions of businesses also rely on multiple Microsoft products and cloud services. “Using Mythos in Microsoft’s Security Development Lifecycle could help strengthen and harden products like Windows, Azure, Microsoft 365, and developer tools,” Shah said. “Every enterprise running those products could benefit from the security improvement without needing direct Mythos access themselves.” Prabhu noted that Microsoft said it had evaluated Mythos using its open-source benchmark for real-world detection engineering tasks, with results showing substantial improvements over prior models. “Such a claim coming from Microsoft does suggest that these new AI models are becoming materially better at identifying exploitable flaws than earlier generations,” Prabhu added. “However, as with any AI tool, the strength of the tool lies in its ability to analyze code quickly based on past learning. There is a possibility that it could miss new types of vulnerabilities that only a ‘human-in-the-loop’ could identify.”
China-Linked GopherWhisper Infects 12 Mongolian Government Systems with Go Backdoors
Mongolian governmental institutions have emerged as the target of a previously undocumented China-aligned advanced persistent threat (APT) group tracked as GopherWhisper. "The group wields a wide array of tools mostly written in Go, using injectors and loaders to deploy and execute various backdoors in its arsenal," Slovakian cybersecurity company ESET said in a report shared with The Hacker
Vercel Finds More Compromised Accounts in Context.ai-Linked Breach
Vercel on Wednesday revealed that it has identified an additional set of customer accounts that were compromised as part of a security incident that enabled unauthorized access to its internal systems. The company said it made the discovery after expanding its investigation to include an extra set of compromise indicators, alongside a review of requests to the Vercel network and environment
Apple Fixes iOS Flaw That Let FBI Recover Deleted Signal Messages
Apple has rolled out a software fix for iOS and iPadOS to address a Notification Services flaw that stored notifications marked for deletion on the device. The vulnerability, tracked as CVE-2026-28950 (CVSS score: N/A), has been described as a logging issue that has been addressed with improved data redaction. "Notifications marked for deletion could be unexpectedly retained on the device,"
CNAPP – ein Kaufratgeber
Gorodenkoff | shutterstock.com Cloud Security bleibt ein diffiziles Thema und die Tools, mit denen sie sich gewährleisten lässt, werden zunehmend komplexer und schwieriger zu durchschauen – auch dank der ungebrochenen Liebe der Branche zu Akronymen. Mit CNAPP kommt nun ein weiteres hinzu. CNAPP – Definition Die Abkürzung steht für Cloud-Native Application Protection Platform – und kombiniert die Funktionen von vier separaten Cloud-Security-Werkzeugen:    Cloud Infrastructure Entitlement Management (CIEM), um sämtliche Zugriffskontrollmaßnahmen und Risikomanagement-Tasks zu managen. Cloud Workload Protection Platform (CWPP), um Code in allen cloudbasierten Repositories abzusichern sowie Laufzeitschutz für die gesamte Entwicklungsumgebung und alle Code-Pipelines zu gewährleisten. Cloud Access Security Broker (CASB) für Authentifizierungs- und Encryption-Aufgaben. Cloud Security Posture Management (CSPM), das Threat Intelligence und Abhilfemaßnahmen kombiniert. Über diese vier „klassischen“ Elemente hat sich CNAPP inzwischen auch auf andere Bereiche ausgeweitet. Zum Beispiel: API-, Skript-, Supply-Chain– sowie Infrastructure-as-Code (IaC)-Sicherheit, Container– und Serverless-Security, sowie weitere Posture-Management-Tools, einschließlich Daten- und SaaS-Applikationen. Aus Anwendersicht ist CNAPP damit sowohl schwer zu verstehen als auch diffizil zu evaluieren – und entsprechend schwer einzukaufen, wie Forrester-Chefanalyst Andras Cser in einem Blogbeitrag zum Thema nahelegt. Weil teilweise auch Security-Optionen außerhalb der Cloud abgedeckt würden, sei jede CNAPP-Kaufentscheidung und -Implementierung auch eine Team- oder abteilungsübergreifende Aufgabe, so der Analyst. Anders ausgedrückt: Geht‘s um CNAPP, muss eine ganze Menge Software abgestimmt, gemanagt, integriert und verstanden werden. Um Ihnen den Überblick zu erleichtern, haben wir die Details zu den wichtigsten Anbietern und Angeboten in diesem Kaufratgeber zusammengetragen. Der CNAPP-Markt Geprägt hat die Produktkategorie – beziehungsweise das Akronym – einmal mehr Gartner. Das Analystenhaus verwendete den Begriff CNAPP erstmals in seinem „Innovation Insight“-Report aus dem August 2021. Der Schlüssel zum Verständnis dieser Produktkategorie liegt in den Integrationsherausforderungen für Unternehmensanwender: Im „State of Observability Report“ von VMware geben 57 Prozent der Befragten an, dass innerhalb einer typischen Cloud-Anwendung bis zu 50 verschiedene Technologien zum Einsatz kommen – die im Schnitt mit zehn Monitoring-Tools gemanagt werden. Und laut dem „Observability Report 2024“ (Download gegen Daten) von Dynatrace besteht eine typische Enterprise-Umgebung im Schnitt aus einem Dutzend unterschiedlichen Cloud-Plattformen, wobei regelmäßig ein Mix aus Private-, Public- und Hybrid-Cloud-Strategien zur Anwendung kommt. Hinzu kommen dann noch verschiedene Instanzen virtueller Maschinen, Kubernetes-Container sowie Serverless- und Microservices-Tools. Diese erhebliche Integrationsbelastung könnte auch ein Grund dafür sein, dass der CNAPP-Markt im zweiten Quartal 2024 ein Gesamtvolumen von 700 Millionen Dollar erreicht hat und damit im Jahresvergleich um 42 Prozent gewachsen ist – wie die Analysten der Dell’Oro Group berichten. CNAPP-Anbieter und ihre Angebote Im Idealfall sollte eine CNAPP-Lösung: Fehlkonfigurationen reduzieren, das Security-Niveau der Entwicklungspipeline optimieren, sowie effektiv automatisieren. Die Anbieter verfolgen mit Blick auf CNAPP zwei unterschiedliche Ansätze: Entweder sie fokussieren die DevSecOps– oder die traditionelle IT-Security-Perspektive. Ersteres hat einen stärkeren Fokus auf den Schutz der Applikationen selbst zur Folge (CIEM/CWPP), letzteres eine Ausweitung traditioneller Schutzmaßnahmen auf Netzwerkebene (CASB/CSPM). Bislang deckt kein CNAPP-Offering wirklich konsequent alle vier Bereiche ab. Natürlich spielt künstliche Intelligenz (KI) auch in diesem Bereich zunehmend eine Rolle: Diverse CNAPP-Anbieter integrieren, beziehungsweise kombinieren KI-Agenten und agentenlose Lösungen in ihren Produkten, um ein umfassenderes Monitoring und eine möglichst breite Abdeckung und Scalability zu bieten.  Aqua Security Platform Fokus: DevSecOps Form: Einheitliche Plattform mit verschiedenen Produkten; Besondere Features/Integrationen: „(No-)Breach-Garantie“ bis zu einer Million Dollar; Preisgefüge: kostenlose Trial-Version; ab 850 Dollar pro Monat; CrowdStrike Falcon Cloud Security Fokus: DevSecOps / IT-Security Form: Einheitliche Plattform mit verschiedenen Produkten; Besondere Features/Integrationen: Cloud Detection and Response (CDR), AppSec, Schwachstellenanalyse für Container-Images; Preisgefüge: Abonnement-Preis richtet sich nach den gewählten Produkten; Data Theorem Fokus: DevSecOps Form: Separate Produkte für Cloud, Web und Supply Chain; Besondere Features/Integrationen: Headliner Attack Policies, Artifact Scanning, zentrale Analyse-Engine, Kubernetes-Support; Preisgefüge: komplex und teuer; unterschiedliche Tarife für jedes Produkt; Lacework FortiCNAPP Fokus: IT-Security Form: Einheitliche Plattform mit verschiedenen Produkten; Besondere Features/Integrationen: Verhaltensbasierte Schutzregeln, SOAR, AppSec, Scans für Build- und Deployment-Pipelines; Preisgefüge: kostenlose Probeversion; richtet sich nach der Nutzungsdauer sowie den in Anspruch genommenen vCPUs; Orca CNAPP Fokus: IT-Security Form: Einheitliche Plattform mit verschiedenen Produkten; Besondere Features/Integrationen: Side Scanning, Risikopriorisierung, AppSec-Pipelines, KI-Features; Preisgefüge: orientiert sich an Workloads, Storage Buckets und Datenbank-Scans sowie den eingesetzten Sensoren; Palo Alto Networks Cortex Cloud Fokus: IT-Security Form: Einheitliche Plattform mit verschiedenen Produkten; Besondere Features/Integrationen: CDR, AppSec-Integration, Laufzeitschutz und DSPM, Support für IBM und Akamai Clouds geplant; Preisgefüge: komplex und teuer; richtet sich nach den gewählten Modulen und abgesicherten Workloads; Qualys Total Cloud CNAPP Fokus: IT-Security Form: Einheitliche Plattform; Besondere Features/Integrationen: CDR, Container und IaC-Security, SaaS Posture Management, KI-Funktionen; Preisgefüge: kostenlose Probeversion; Abo-Modell auf Workload-Basis; Sysdig Secure Fokus: DevSecOps Form: Einzelprodukt; Besondere Features/Integrationen: „Next Generation“ CDR, Risikopriorisierung, KI-Funktionen und-Analysen; Preisgefüge: Festpreis pro Host Model; ab circa 500 Dollar pro Monat; Tenable Cloud Security Fokus: IT-Security Form: Standalone-Lösung oder als Bestandteil der Exposure-Management-Plattform Tenable One; Besondere Features/Integrationen: Exposure Management, DSPM, KI Security, Kubernetes- und IaC-Support; Preisgefüge: kostenlose Probeversion; komplexes Preismodell, das sich an Nodes oder Workloads ausrichten lässt;  Tigera Calico Cloud Fokus: DevSecOps Form: Einzelprodukt; Besondere Features/Integrationen: fokussiert in erster Linie auf Container- und Kubernetes-Security; Preisgefüge: kostenlose Open-Source-Version; kommerzielle Optionen mit Abo-Abrechnungsmodell oder pro Node-Stunde; Uptycs Fokus: IT-Security Form: Einheitliche Plattform; Besondere Features/Integrationen: XDR, AppSec, DSPM, KI- und ML-Funktionen; Preisgefüge: diverse Optionen; ab circa 5.000 Dollar pro Jahr (200 Cloud Assets); Wiz Fokus: IT-Security Form: Einheitliche Plattform mit verschiedenen Produkten; Besondere Features/Integrationen: Risikopriorisierung mit Graph-basierten Visualisierungen und Analysen von Code zu Cloud zu Runtime, KI-Funktionen, Container- und Kubernetes-Support; Preisgefüge: verschiedene Preispläne, die sich nach den Workloads richten; 5 Fragen vor dem CNAPP-Investment Bevor Sie sich für einen dieser CNAPP-Anbieter entscheiden, sollten Sie sich folgende Fragen stellen: Welche Cloud-Artefakte lassen sich mit der gewählten Lösung scannen? Einige Produkte (Lacework) fokussieren auf die drei großen IaaS-Anbieter, andere (Tigera) unterstützen nur die Kubernetes-Dienste der Hyperscaler. Wieder andere (Sysdig) nehmen vor allem Container und die verschiedenen Linux-Server, auf denen diese laufen, in den Fokus. Vor allem kommt es jedoch darauf an, die Artefakte kontinuierlich und (nahezu) in Echtzeit überwachen zu können. Wie werden Sicherheitsvorfälle gemeldet? Gibt es separate Zugriffsregeln, damit sich verschiedene Mitarbeiter auf bestimmte Bereiche konzentrieren können? Gibt es separate oder kombinierte, vordefinierte Sicherheitsrichtlinien, um Daten mit und ohne Agenten zu erfassen? Wie aussagekräftig sind die Dashboards und die Visualisierungen, die diese liefern? Inwieweit werden die vier Management-Tool-Bereiche abgedeckt? Einige Angebote bieten CWPP- und CSPM-Elemente, müssen aber, etwa für Kubernetes-Support, erweitert werden. Welche DevOps-Frameworks werden unterstützt? Wie sieht es mit Blick auf Open-Source-Repositories aus? Wie viel kostet die Lösung konkret? Nur wenige CNAPP-Anbieter bieten eine wirklich transparente Preisgestaltung. Insbesondere bei komplexen Preismodellen (Data Theorem, Qualys, Orca) besteht deshalb Klärungsbedarf. (fm)
Riddled with flaws, serial-to-Ethernet converters endanger critical infrastructure
Serial-to-Ethernet adapters used in industrial, retail, and healthcare environments to link serial devices to TCP/IP networks are riddled with vulnerabilities and outdated open-source components, researchers warn. The flaws enable various attacks scenarios, including taking full control of mission-critical equipment such as remote terminal units, programmable logic controllers, point-of-sale systems, and bedside patient monitors. In a new study dubbed BRIDGE:BREAK, researchers from cybersecurity firm Forescout analyzed the firmware from five major vendors of serial-to-IP converters and found that each firmware image contained on average 80 open-source software components with almost 2,500 known vulnerabilities in them and 89 publicly available exploits. In addition, the researchers identified 22 new vulnerabilities in three devices from Lantronix and Silex Technology America with impact ranging from remote code execution to authentication bypass, information disclosure, and denial-of-service. Search engines such as Shodan show close to 20,000 internet-exposed serial-to-Ethernet converters, though the number of such devices deployed within networks is likely in the millions, as they are used across many industries. But even when they are not directly connected to the internet, attackers can still reach such devices after breaking into internal networks through a variety of other initial access vectors. Because serial protocols often lack authentication or encryption “attackers may alter serial data received from a sensor as it moves into the IP network,” the researchers said. “For example, changing temperature, pressure, humidity, flow, patient heart rate readings to arbitrary values. Conversely, attackers may modify commands traveling from the IP network to the serial side before they reach an actuator. For example, changing the speed or direction of a servo motor.” Serial-to-IP converters have been targeted in real-world attacks against critical infrastructure in the past. For example, in a 2015 cyberattack that disrupted power distribution at several power substations in Ukraine, attackers loaded corrupted firmware onto Moxa serial-to-IP converters via the firmware update function. Then just a few months ago in December, wind and solar farms in Poland were targeted by Russian hackers in a cyberattack that involved resetting the configurations on Moxa NPort serial device servers. The devices were not directly exposed to the internet, but attackers gained access to them after compromising VPN concentrators. Vulnerable components and lack of firmware hardening Firmware in devices analyzed by Forescout was running old versions of the Linux kernel as well as other outdated libraries and userspace binaries. In addition, half of the Linux kernel branches observed reached end of life, complicating future updates. As a result, analyzed firmware images had more than 2,000 known vulnerabilities on average, most located in the Linux kernel itself. The firmware image with the lowest number of flaws still had 210 vulnerabilities. Of course, not all flaws are equal, but on average 68% were low or medium severity, 29% were high severity, and 3% were critical severity. Because of the old kernel versions used, the anti-exploit mitigations applied at the OS level for binaries were also highly inconsistent. Only 23% of firmware images used stack canaries, a feature that prevents stack smashing exploits; 44% used RELRO (Relocation Read-Only), which prevents attackers from redirecting execution by overriding the Global Offset Table; 67% used PIE (Position Independent Executable), a mechanism that makes Return Oriented Programming (ROP) attacks much harder; and 84% used NX (No-eXecute bit), a feature that marks certain memory stack and heap areas as non-executable to prevent straightforward buffer overflow exploits. New RCE and other vulnerabilities Aside from all the known vulnerabilities from open-source components, the Forescout researchers also performed manual security analysis and identified previously unknown flaws in the firmware of three specific devices from two vendors: Lantronix EDS3000PS Series, Lantronix EDS5000 Series, and Silex SD330-AC. The web-based management interface of the Lantronix EDS5000 had five flaws in multiple pages and fields caused by missing input sanitization that could lead to remote code execution as root. The Lantronix EDS3000PS had one RCE, an authentication bypass issue and a device takeover flaw where the password change feature did not ask for the old password, potentially allowing attackers to change the password for the administrator account. While the Lantronix flaws were all in the web interface, some of the 12 vulnerabilities found in the Silex SD-330AC were in various network services, exploitable via UDP packets. In total the researchers found three new RCE flaws, an authentication bypass, an arbitrary file upload issue that could allow unauthenticated attackers to upload firmware binaries, two device takeover and privilege escalation bugs, two configuration tampering flaws, and other issues that could lead to information disclosure and denial-of-service. In addition, the researchers found that the firmware signing key may be obtainable by attackers, which could give them the ability to create malicious firmware images. Silex is in the process of remediating this issue. Mitigation “As these devices are increasingly deployed to connect legacy serial equipment to IP networks, vendors and end-users should treat their security implications as a core operational requirement,” the Forescout researchers said. Both Lantronix and Silex already released firmware updates to address the reported flaws: SD-330AC Firmware version 1.50, EDS5000 series version 2.2.0.0R1, and EDS3000 series version 3.2.0.0R2. In addition to patching, Forescout recommends: Replacing default credentials and prohibiting weak passwords to reduce the risk of exploiting authenticated vulnerabilities Segmenting networks to prevent threat actors from reaching vulnerable serial-to-IP converters or using those devices to compromise other critical assets Ensuring they are not exposed to the internet Implementing strict access controls for management interfaces (such as the Web UI) so only preapproved management workstations can access them Using dedicated subnetworks or VLANs where they are only allowed to communicate with the serial devices they manage and the IP-side devices that should have access to that serial data Monitoring for exploitation attempts on serial-to-IP converters and for unusual communication patterns that suggest an attacker is targeting data read from, or sent to, the serial link
Claude Mythos signals a new era in AI-driven security, finding 271 flaws in Firefox
The Claude Mythos Preview appears to be living up to the hype, at least from a cybersecurity standpoint. The model, which Anthropic rolled out to a small group of users, including Firefox developer Mozilla, earlier this month, has discovered 271 vulnerabilities in version 148 of the browser. All have been fixed in this week’s release of Firefox 150, Mozilla emphasized. These findings set a new precedent in AI’s ability to unearth bugs, and could turbocharge cybersecurity efforts. “Nothing Mythos found couldn’t have been found by a skilled human,” said David Shipley of Beauceron Security. “The AI is not finding a new class of AI-exclusive super bugs. It’s just finding a lot of stuff that was missed.” However, the news comes as Anthropic is reportedly investigating unauthorized use of Mythos by a small group who reportedly gained access via a third party vendor environment, revealing the double-edged nature of AI. Closing the fuzzing gap Firefox has previously pointed AI tools, notably Anthropic’s Claude Opus 4.6, at its browser in a quest for vulnerabilities, but Opus discovered just 22 security-sensitive bugs in Firefox 148, while Mythos uncovered more than ten times that many. Firefox CTO Bobby Holley described the sense of “vertigo” his team felt when they saw that number. “For a hardened target, just one such bug would have been red-alert in 2025,” he wrote in a blog post, “and so many at once makes you stop to wonder whether it’s even possible to keep up.” Firefox uses a defense-in-depth strategy, with internal red teams applying multiple layers of “overlapping defenses” and automated analysis techniques, he explained. Teams run each website in a separate process sandbox. However, no layer is impenetrable, Holley noted, and attackers combine bugs in the rendering code with bugs in the sandboxes in an attempt to gain privileged access. While his team has now adopted a more secure programming language, Rust, the developers can’t afford to stop and rewrite the decades’ worth of existing C++ code, “especially since Rust only mitigates certain, (very common) classes of vulnerabilities.” While automated analysis techniques like fuzzing, which uncovers vulnerabilities or bugs in source code, are useful, some bits of code are more difficult to fuzz than others, “leading to uneven coverage,” Holley pointed out. Human teams can find bugs that AI can’t by reasoning through source code, but this is time-consuming, and is bottlenecked due to limited human resources. Now, Claude Mythos Preview is closing this gap, detecting bugs that fuzzing doesn’t surface. “Computers were completely incapable of doing this a few months ago, and now they excel at it,” Holley noted. Mythos Preview is “every bit as capable” as human researchers, he asserted, and there is no “category or complexity” of vulnerability that humans can find that Mythos can’t. Defenders now able to win ‘decisively’? Gaps between human-discoverable and AI-discoverable bugs favor attackers, who can afford to concentrate months of human effort to find just one bug they can exploit, Holley noted. Closing this gap with AI can help defenders erode that long-term advantage. The industry has largely been fighting security “to a draw,” he acknowledged, and security has been “offensively-dominant” due to the size of the attack surface, giving adversaries an “asymmetric advantage.” In the face of this, both Mozilla and security vendors have “long quietly acknowledged” that bringing exploits to zero was “unrealistic.” But now with Mythos (and likely subsequent models), defenders have a chance to win, “decisively,” Holley asserted. “The defects are finite, and we are entering a world where we can finally find them all.” What security teams should do now Finding 271 flaws in a mature codebase like Firefox illustrates the fact that AI-driven vulnerability discovery is now operating at a scale and depth that can outpace traditional human-led review, noted Ensar Seker, CISO at cyber threat intelligence company SOCRadar. Holley’s “vertigo,” he said, was because defenders are realizing the attack surface is larger, and “more rapidly discoverable than previously assumed.” Security teams must respond by shifting from periodic testing to continuous validation, Seker advised. That means integrating AI-assisted code analysis into continuous integration/continuous delivery (CI/CD) pipelines, prioritizing “patch velocity over perfection,” and assuming that any externally reachable code path will eventually be discovered and weaponized. “The goal is no longer just finding vulnerabilities first, but reducing the window between discovery and remediation,” he said. Shipley agreed that any company building software must evaluate resourcing so it can quickly and proactively find and fix vulnerabilities. “But stuff will happen,” he acknowledged. So, in addition to doing proactive work, enterprises must regularly exercise their incident response playbooks. “The next few years are going to be a marathon, not a sprint,” said Shipley. Dual-use nature of AI is a challenge However, the dual-use nature of these systems present a big challenge. The same capability that helps defenders identify hundreds of flaws can be turned against them if the model or its outputs are exposed, Seker pointed out. The reported unauthorized access to Mythos “reinforces that AI systems themselves are now high-value targets, effectively becoming part of the attack surface,” he said. It’s not at all surprising that people found a way to access Mythos, Shipley agreed; it was inevitable. “Nor does Anthropic have some unique, insurmountable or exclusive AI capability for hacking,” he said, pointing out that OpenAI is already catching up in that regard, and others will “catch and surpass” Mythos. Striking a balance requires treating AI models like privileged infrastructure, Seker noted. Enterprises need strict access controls, output monitoring, and isolation of sensitive workflows. Developers, meanwhile, must adapt by writing code that is resilient to automated scrutiny; this requires stronger input validation, safer defaults, and “fewer assumptions about obscurity.” “In this paradigm, security isn’t just about defending systems; it’s about defending the tools that are now capable of breaking them at scale,” Seker emphasized.
Malicious pgserve, automagik developer tools found in npm registry
Application developers are being warned that malicious versions of pgserve, an embedded PostgreSQL server for application development, and automagik, an AI coding tool, have been dropped into the npm JavaScript registry, where they could poison developers’ computers. Downloading and using these versions will lead to the theft of data, tokens, SSH keys, credentials, including those for Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), crypto coins from browser wallets, and browser passwords. The malware also spreads to other connected PCs. The warnings came this week from researchers at two security firms. Researchers at Socket found fake packages aimed at app developers looking for pgserve, an embedded PostgreSQL server for application development and testing, and automagik, an AI coding and agent-orchestration CLI from Namastex.ai. The researchers said the attack contains similarities to a recent campaign dubbed CanisterWorm, a worm-enabled supply chain attack that replaced the contents of legitimate packages with malware on npm. At the time of Socket’s review, the fake automagik/genie package showed 6,744 weekly downloads, and the fake pgserve package showed about 1,300 weekly downloads. The phony versions of automagik were versions 4.260421.33 through 4.260421.39 when Socket posted its advisory, and additional malicious versions are still being published and identified. The full scope of affected releases, maintainers, or release-path compromise is still under investigation, the researchers said. Separately, researchers at StepSecurity also found malicious versions of pgserve on npm, noting that the compromised versions (1.1.11, 1.1.12 and 1.1.13) inject a 1,143-line credential-harvesting script that runs via postinstall every time it is installed. The last legitimate release of pgserve is v1.1.10, according to StepSecurity. StepSecurity said that, unlike simple infostealers, this malware is a supply-chain worm: If it finds an npm publish token on the victim machine, it re-injects itself into every package that token can publish, further propagating the compromise. Stolen data is encrypted and exfiltrated to a decentralized Internet Computer Protocol (ICP) canister, a blockchain-hosted compute endpoint chosen specifically because it cannot be taken down by law enforcement or domain seizure. Yet another supply chain attack This is just the latest example of a software supply chain attack, in which threat actors hope that developers will download infected utilities and tools from an open source registry and use them in packages that will spread the malware widely. In one of the most recent examples, hackers last month compromised the npm account of the lead maintainer of the Axios HTTP client library. And last summer, attackers compromised several JavaScript testing utilities on npm. Advice to victimized developers Developers who have downloaded the malicious versions of pgserver and automagik need to act fast, says Tanya Janca, head of Canadian secure coding consultancy SheHacksPurple. “Rotate every credential you can think of, right now, before you do anything else,” she said. “Then harden your CI/CD network egress controls so your build runners can only reach the domains they explicitly need. Make sure your build runners and deployment runners use separate service accounts with separate permissions. The goal is to make sure that even if a malicious package runs in your build environment, it cannot reach an attacker’s infrastructure (for data and secret exfiltration) and also block it from pivoting into your deployment pipeline.” To prevent being compromised by any malicious npm package, Janca said IT leaders should disable automatic postinstall script execution by default. Developers should also run this command immediately: npm config set ignore-scripts true. Some legitimate packages will occasionally break as a result of this, she admitted. But the goal is to create an intentional point of friction to force developers to consciously decide a script is or is not allowed to run on their machines. In addition, she said, developers need tooling that checks whether what is published to npm actually matches what is in the source repository. “Not all software composition analysis tools do this,” Janca said, “so ask your vendor specifically whether the tool catches registry-to-repo mismatches.” Finally, she advised, apply the principle of least privilege access to publishing tokens; scope them tightly, give them only the permissions they need for one specific package, and rotate them regularly — automatically, not manually. More than just credential theft “People tend to think of this as a credential theft incident,” Janca said. “It is actually a potential complete organizational takeover, and it can unfold in stages. First, the attacker gets your secrets on install: AWS keys, GitHub tokens, SSH keys, database passwords, everything sitting in your environment or home directory. Second, if you have an npm publish token, the worm immediately uses it to inject itself into every package you can publish, which means your downstream users are now also victims. Third, those stolen cloud credentials get used to pivot into your infrastructure: spinning up resources, exfiltrating data, moving laterally across accounts. Fourth, your CI/CD pipelines, which trust your runners and service accounts implicitly, welcomes the attackers malicious code into production.” She pointed out that it often takes a long time for developers to notice attacks like this, “and by that time, the attacker has potentially had access to source code, production systems, customer data, and the software your users count on.” Shift in tactics Janet Worthington, a senior security and risk analyst at Forrester Research, said that recent attacks such as the CanisterSprawl campaign and the compromise of the Namastex.ai npm packages show a shift from threat actors toward self-propagating malware that steals credentials and uses them to automatically infect other packages. “This behavior echoes earlier outbreaks like the Shai-Hulud worm, which spread across hundreds of packages by harvesting npm tokens and republishing trojanized versions belonging to the compromised maintainer,” she said in an email. While open registry platforms like npm are introducing stronger protections around publisher accounts and tokens, these incidents highlight the fact that compromises are no longer isolated to a single malicious package, she said. Instead, they cascade quickly through a registry ecosystem and even jump to other ecosystems. “Enterprises should ensure that only vetted open source and third party components are utilized by maintaining curated registries, automating SCA [software composition analysis] in pipelines and utilizing dependency firewalls to limit exposure and blast radius,” said Worthington. Developers sit at the intersection of source code, cloud infrastructure, CI/CD pipelines, and publishing credentials, Janca pointed out, so compromising one developer can mean compromising every user of every package they maintain, or even an entire organization. This attack, and several others in recent months, are also going after personal crypto wallets alongside corporate credentials. “That tells us,” she said, “that attackers understand exactly the type of person they are hitting and they are optimizing for maximum yield from a single attack.” This article originally appeared on InfoWorld.
Microsoft issues out-of-band patch for critical security flaw in update to ASP.NET Core
Developers are advised to check their applications after Microsoft revealed that last week’s ASP.NET Core update inadvertently introduced a serious security flaw into the web framework’s Data Protection Library. Microsoft describes the issue as a “regression,” coding jargon for an update that breaks something that was previously working correctly. In this case, what was introduced was a CVSS 9.1-rated critical vulnerability, identified as CVE-2026-40372, that affects ASP.NET’s Core Data Protection application library distributed via the NuGet package manager. It impacts Linux, macOS and other non-Windows OSes, as well as Windows systems where the developer explicitly opted into managed algorithms via the UseCustomCryptographicAlgorithms API. A bug in the .NET 10.0.6 package, released as part of the Patch Tuesday updates on April 14, causes the ManagedAuthenticatedEncryptor library to compute the validation tag for the Hash-based Message Authentication Code (HMAC) using an incorrect offset. Incorrect calculation of security hashes results in the .AspNetCore application cookies and tokens being validated and trusted when they shouldn’t be. “In these cases, the broken validation could allow an attacker to forge payloads that pass DataProtection’s authenticity checks, and to decrypt previously-protected payloads in auth cookies, anti-forgery tokens, TempData, OIDC state, etc,” said Microsoft’s GitHub advisory. When embedded in applications, these long-lived tokens confer the sort of power attackers quickly jump on. “If an attacker used forged payloads to authenticate as a privileged user during the vulnerable window, they may have induced the application to issue legitimately-signed tokens (session refresh, API key, password reset link, etc.) to themselves,” the advisory noted. This vulnerability arrives only six months after ASP.NET suffered one of its worst ever flaws, October’s CVSS 9.9-rated CVE-2025-55315 in the Kestrel web server component. But somewhat alarmingly, the current advisory goes on to compare the issue to MS10-070, an emergency patch for CVE-2010-3332, an infamous zero-day vulnerability in the way Windows ASP.NET handled cryptographic errors that caused a degree of panic in 2010. Not a simple update Normally, when flaws are uncovered, the drill involves merely applying an update, workaround, or mitigation. In this case, the update itself should have already happened automatically for server builds, taking runtimes to the patched version 10.0.7. However, for developers using the popular Docker container platform, things are more complicated. For those projects, the Data Protection Library is also embedded in built applications. Addressing this requires updating and rebuilding any ASP.NET Core applications created after the April 14 update. In addition, those using 10.0.x on the netstandard2.0 or net462 target framework asset from the flawed NuGet package, for compatibility with older operating systems including Windows, are also affected. Detecting affected binaries How will developers know if a vulnerable binary has been loaded? Microsoft’s security advisory offers the following advice: “Check application logs. The clearest symptom is users being logged out and repeated The payload was invalid errors in your logs after upgrading to 10.0.6. Check your project file. Look for a PackageReference to Microsoft.AspNetCore.DataProtection version 10.0.6 in your .csproj file (or in a package that depends on it). You can also run dotnet list package to see resolved package versions.” In summary, developers should rebuild affected applications to apply the fixed version, expire all affected authentication cookies and tokens to remove forgeries, and rotate to apply new ASP.NET Core Data Protection tokens. While there is no evidence that the issue has been exploited by attackers, good security hygiene mandates also checking for unexpected or unusual logins failures, errors, or authentication failures, Microsoft advised.
Malicious KICS Docker Images and VS Code Extensions Hit Checkmarx Supply Chain
Cybersecurity researchers have warned of malicious images pushed to the official "checkmarx/kics" Docker Hub repository. In an alert published today, software supply chain security company Socket revealed that unknown threat actors managed to have overwritten existing tags, including v2.1.20 and alpine, while also introducing a new v2.1.21 tag that does not correspond to an official release. The
Self-Propagating Supply Chain Worm Hijacks npm Packages to Steal Developer Tokens
Cybersecurity researchers have flagged a fresh set of packages that have been compromised by bad actors to deliver a self-propagating worm that spreads through stolen developer npm tokens. The supply chain worm has been detected by both Socket and StepSecurity, with the companies tracking the activity under the name CanisterSprawl owing to the use of an ICP canister to exfiltrate the stolen data
Harvester Deploys Linux GoGra Backdoor in South Asia Using Microsoft Graph API
The threat actor known as Harvester has been attributed to a new Linux version of its GoGra backdoor deployed as part of attacks likely targeting entities in South Asia. "The malware uses the legitimate Microsoft Graph API and Outlook mailboxes as a covert command-and-control (C2) channel, allowing it to bypass traditional perimeter network defenses," the Symantec and Carbon Black Threat Hunter
NFC tap-to-pay gets tapped by hackers
Cyber crooks are abusing a trojanized Android payment application to steal near field communication (NFC) data and PINs, enabling cloning of payment cards and draining victim accounts. According to ESET researchers, a new variant of the NGate malware has been infused into the HandyPay NFC-relay application to transfer NFC data to the attacker’s device and use it for contactless ATM cash-outs. Use of AI is suspected in the campaign. “To trojanize HandyPay, threat actors most probably used GenAI, indicated by emoji left in the logs that are typical of AI-generated text,“ the researchers said in a blog post. The campaign has been distributing two malware samples, through a fake lottery website and a fake Google Play website, in attacks targeting Android users in Brazil since November 2025. Legit app doing the dirty work ESET researchers pointed out that the campaign marks NGate operators shifting from custom tooling to a trojanized legitimate application. HandyPay, originally designed to relay NFC data between devices, is being used to require minimal permissions and blend into expected payment workflows. This approach avoids building custom tooling from scratch, previously seen with the NFCGate abuse, and instead adds malicious code into an existing NFC-capable app. By repurposing an NFC relay app, the attackers inherit functionality that already handles the core data exchange, the researchers noted.An NFC-relay app is a tool that captures contactless communication from a card or device and forwards it in real time to another device, extending the short-range Near Field Communication signal over a network for remote use. Because the app operates within expected NFC workflows, it is easier for attackers to mask the attack. The distribution channels include a fake lottery site impersonating Brazil’s “Rio de Premios,” and a spoofed Google Play page advertising a “card protection” tool. AI was likely used ESET researchers also spotted something unusual in the malware’s internals. Some traces suggested generative AI may have played a role in its development. Specifically, the injected malicious code contains emoji markers in debug logs, something more commonly associated with AI-generated output than human-written malware. The researchers noted that this isn’t definitive proof but aligns with a broader trend of attackers using large language models to accelerate malware creation. Android presently has some protection against this attack vector in the form of security alerts. “The victim needs to manually install a trojanized version of HandyPay, since the app is only available outside Google Play,” the researchers said. “When a user taps the download app button in their browser, Android automatically blocks the install and shows a prompt asking them to allow installation from this source.” For the attack to be successful, the user then needs to tap Settings in the prompt, enable “Allow from this source,” and return to installing the app, a process quite common with third-party app installation these days. Nothing particularly suspicious stands out in the “allow download” workflow to protect against this threat. ESET shared a list of indicators in a dedicated GitHub repository, which included files, hashes, network indicators, and MITRE ATT&CK maps to support detection efforts.
Lotus Wiper Malware Targets Venezuelan Energy Systems in Destructive Attack
Cybersecurity researchers have discovered a previously undocumented data wiper that has been used in attacks targeting Venezuela at the end of last year and the start of 2026. Dubbed Lotus Wiper, the novel file wiper has been used in a destructive campaign targeting the energy and utilities sector in Venezuela, per findings from Kaspersky. "Two batch scripts are responsible for initiating the
When Wi-Fi Encryption Fails: Protecting Your Enterprise from AirSnitch Attacks
Unit 42 research reveals AirSnitch attacks bypass WPA2/3 Wi-Fi encryption and client isolation, exposing critical infrastructure vulnerabilities. The post When Wi-Fi Encryption Fails: Protecting Your Enterprise from AirSnitch Attacks appeared first on Unit 42.
Microsoft Patches Critical ASP.NET Core CVE-2026-40372 Privilege Escalation Bug
Microsoft has released out-of-band updates to address a security vulnerability in ASP.NET Core that could allow an attacker to escalate privileges. The vulnerability, tracked as CVE-2026-40372, carries a CVSS score of 9.1 out of 10.0. It's rated Important in severity. An anonymous researcher has been credited with discovering and reporting the flaw. "Improper verification of cryptographic
Anthropic bets on EPSS for the coming bug surge
Anthropic’s Mythos has intensified a problem that vulnerability management programs were already struggling to contain: too many vulnerabilities and not enough clarity about which ones matter. What changes with Mythos — and the AI-based class of vulnerability discovery systems it represents — is the speed at which software flaws can be found and exploited. That speed raises a more immediate question for defenders: Which vulnerabilities require action? Anthropic has pointed to one method. In guidance tied to its work on AI-accelerated offense, the company recommended using the Exploit Prediction Scoring System (EPSS), a probabilistic model developed by the data scientists behind Empirical Security, and published through FIRST, as a way to triage vulnerabilities as discovery increases. According to Anthropic, “Patching the KEV [CISA’s Known Exploited Vulnerabilities catalog] list first, and then everything above a chosen EPSS threshold will help you turn thousands of open CVEs into a manageable queue.” “EPSS uses the same probabilistic models that weather forecasters do,” Michael Roytman, co-founder and CTO of Empirical Security and one of the original EPSS authors, told CSO. “The forecast is which vulnerabilities are likely to be exploited somewhere on the internet in the next 30 days.” Roytman added, “We don’t deal with rain by constantly having an umbrella over our heads. We have predictive models that tell us whether we should or should not bring an umbrella.” Ed Bellis, CEO of Empirical Security, told CSO that Anthropic’s recommendation stood out because of who made it, not because EPSS is new. According to Bellis, it was the first time, to his knowledge, that a large language model provider had explicitly endorsed a probabilistic, purpose-built model for vulnerability prioritization. A system already under strain Mythos arrives as the vulnerability ecosystem is already under strain. Most recently, the volume of new vulnerabilities forced NIST to scale back enrichment of its National Vulnerability Database (NVD) to only certain CVEs. The NVD enriches vulnerability reports with CVSS scores, which are developed by FIRST, while EPSS provides a separate estimate of exploitation likelihood. “The fact that they’re [NIST] narrowing down the vulnerabilities that they are going to focus on [for CVSS] is because it’s all human-driven,” Bellis said. EPSS, by contrast, is machine-driven and can be applied across all CVEs, with scores published daily. “It’s machine-driven, and it’s a machine learning model that ultimately scores that vulnerability,” Bellis added. “The average vulnerability management practice today is not thinking about it from a machine-learning, data-driven perspective, but they could be.” According to the Zero Day Clock, the mean time to exploit a vulnerability after it’s been discovered is going to reach one hour this year, and only one minute by 2028, down from 2.3 years in 2018. Security leaders weigh promise versus reality Security vendors are increasingly incorporating EPSS scores into their systems. According to Roytman, EPSS has been incorporated into more than 120 security vendors’ products, including CrowdStrike, Cisco, Palo Alto Networks, Qualys, and Tenable platforms. “I do not think other CISOs realize how broadly EPSS has been adopted, but that adoption is great news for the industry,” James Robinson, CISO at Netskope, told CSO. “EPSS, when applied to [software flaws], is an essential step in being able to know if this exploitable vulnerability applies to your implementation or operation,” he said, adding that “the role that EPSS can play in identifying non-CVE vulnerabilities identified from Mythos and other upcoming models is extremely useful.” Aaron Weismann, CISO at Main Line Health, welcomed the faster discovery of vulnerabilities but questioned whether the guidance translates to sectors such as healthcare, telling CSO, “It’ll be interesting to see how actionable those recommendations are for critical infrastructure — like healthcare, utilities, government, and others — where immediate and automated patching can be challenging due to the prevalence of legacy hardware and software.” Not all defenders embrace the concept of EPSS or even CVSS to address the rapid discovery of vulnerabilities. “To be direct: Both CVSS and EPSS are fundamentally outdated in the ‘Mythos’ era and require a complete rethink,” Ramy Houssaini, chief cyber solutions officer of Cloudflare, told CSO. “EPSS relies on lagging, 30-day historical data, but AI has collapsed the time-to-exploit into mere minutes. Instead of waiting for a predictive score to prioritize human-speed patching, organizations must shift to real-time defense.” Exposure management will extend beyond CVEs While most of the analysis of the power of Mythos to discover vulnerabilities has centered on common applications to which CVEs can be applied, its discoveries will most likely reveal millions of other vulnerabilities that don’t meet this definition. “A similar process is happening across clouds and applications, where there is no common enumerator across those applications,” Empirical Security’s Roytman said. “My application looks very different than yours, even if it’s written in the same language,” he added. “So, when we think about that probabilistic modeling expanding to all of exposure management, which might be a bigger problem than just CVEs themselves, we have to think about building local predictive models for applications, clouds, configurations, misconfigurations, and that is another exercise in taking advantage of the existing security tooling and building small, purpose-built models rather than having humans do the manual triage work.” In short, Mythos and competing AI models will soon be able to find millions and millions of vulnerabilities that will not fit into the CVE model. “We see enterprises all the time that might have tens of millions of open instances of vulnerabilities, let alone the sheer volume of those classes of flaws that they’re going to discover on the AI front,” Bellis said. “This is a problem, but the sky is not falling,” Roytman said. “There are methods for managing it.”
Mustang Panda’s New LOTUSLITE Variant Targets India Banks, South Korea Policy Circles
Cybersecurity researchers have discovered a new variant of a known malware called LOTUSLITE that's distributed via a theme related to India's banking sector. "The backdoor communicates with a dynamic DNS-based command-and-control server over HTTPS and supports remote shell access, file operations, and session management, indicating a continued espionage-focused capability set rather than
Cohere AI Terrarium Sandbox Flaw Enables Root Code Execution, Container Escape
A critical security vulnerability has been disclosed in a Python-based sandbox called Terrarium that could result in arbitrary code execution. The vulnerability, tracked as CVE-2026-5752, is rated 9.3 on the CVSS scoring system. "Sandbox escape vulnerability in Terrarium allows arbitrary code execution with root privileges on a host process via JavaScript prototype chain traversal," according to
CrowdStrike Falcon Cloud Security Delivered 264% ROI Through Unified Cloud Protection
SBOM erklärt: Was ist eine Software Bill of Materials?
Softwareentwicklung und Autoproduktion haben mehr gemein, als man denkt. Lesen Sie, was Sie zum Thema Software Bill of Materials (SBOM) wissen sollten. Foto: Ju1978 – shutterstock.comEine Software Bill of Materials ist ein detaillierter Leitfaden, der unter anderem Aufschluss über die Komponenten Ihrer Software gibt. Als eine Art Stückliste hilft eine SBOM Anbietern und Käufern gleichermaßen, den Überblick über die Komponenten zu behalten und die Sicherheit der Softwarelieferkette zu verbessern. SBOM – Definition Eine Software Bill of Materials ist eine formale, strukturierte Aufzeichnung, die die Komponenten eines Softwareprodukts und ihre Beziehungen innerhalb der Softwarelieferkette beschreibt. Eine SBOM gibt also einerseits an, welche Pakete und Bibliotheken in Ihre Anwendung eingeflossen sind, andererseits auch die Beziehung zwischen diesen Paketen und Bibliotheken und anderen vorgelagerten Projekten. Das ist besonders wichtig ist, wenn es um wiederverwendeten Code und Open-Source-Komponenten geht. Sie kennen Stücklisten vielleicht im Zusammenhang mit Neuwagen. In diesem Fall handelt es sich um ein Dokument, das jede Komponente, die sich in Ihrem neuen Fahrzeug befindet, detailliert beschreibt. Auch wenn Ihr Auto von Toyota oder General Motors zusammengebaut wurde: Viele seiner Komponenten stammen von Subunternehmern auf der ganzen Welt. Die Stückliste gibt Aufschluss darüber, woher jedes einzelne dieser Teile stammt. Das dient nicht nur der Transparenz, sondern auch der Sicherheit: Wird eine bestimmte Serie von Airbags zurückgerufen, müssen die Fahrzeughersteller schnell herausfinden können, wo diese verbaut sind. Da Open-Source-Bibliotheken von Drittanbietern sich jedoch zunehmender Beliebtheit erfreuen, um containerisierte, verteilte Applikationen zu erstellen, weisen Softwareentwicklung und Fahrzeugfertigung inzwischen mehr Gemeinsamkeiten auf, als man denkt. Sowohl Entwickler als auch Benutzer können eine Software Bill of Materials verwenden, um nachzuvollziehen, welche Bestandteile in die Software eingeflossen sind, wie sie verteilt und verwendet wurden. Das erlaubt – insbesondere aus Sicherheitsperspektive – eine Reihe wichtiger Rückschlüsse. Software Bill of Materials – Vorteile Die Zeiten monolithischer, proprietärer Codebasen sind längst vorbei. Moderne Anwendungen basieren oft auf in großen Teilen wiederverwendetem Code – häufig mit Beteiligung von Open-Source-Bibliotheken. Diese Anwendungen werden auch zunehmend in kleinere, in sich geschlossene Funktionskomponenten, so genannte Container, aufgeteilt, die über Orchestrierungsplattformen wie Kubernetes gemanagt und lokal oder in der Cloud ausgeführt werden. Im Großen und Ganzen waren diese Veränderungen ein Segen für die Softwareentwicklung und haben dazu beigetragen, die Entwicklerproduktivität zu erhöhen und Kosten zu senken. Aus Security-Perspektive sieht das Bild nicht ganz so rosig aus: Indem sie sich in hohem Maße auf den Code von Drittanbietern verlassen, (deren interne Prozesse sie möglicherweise nicht oder nur teilweise kennen), haben Entwickler eine Lieferkette von Softwarekomponenten geschaffen, die genauso komplex ist, wie die von Herstellern physischer Produkte. Da eine Anwendung jedoch nur so sicher ist wie ihre schwächste Komponente, kann dieses Gebahren gravierende Schwachstellen zur Folge haben. Die 2020er Jahre waren von einer Reihe von Angriffen auf die Softwarelieferkette geprägt, die für Schlagzeilen sorgten: Ende 2020 gelang es Hackern, die mit dem russischen Geheimdienst in Verbindung stehen sollen, eine Backdoor in die Netzwerk-Monitoring-Plattform von SolarWinds einzuschleusen. Diese wird wiederum von anderen Sicherheitsprodukten genutzt, was zu ihrer Kompromittierung führte. Ende 2021 wurde eine schwerwiegende Sicherheitslücke in Apache Log4j entdeckt, einer Java-Bibliothek, die für die Protokollierung von Systemereignissen verwendet wird. Das hört sich nur so lange langweilig an, bis man feststellt, dass fast jede Java-Anwendung Log4j in irgendeiner Form verwendet und damit angreifbar wird. Diese Sicherheitskrisen verdeutlichen die potenzielle Rolle der Software Bill of Materials innerhalb der Sicherheitslandschaft. Viele Anwender haben vielleicht nur beiläufig von diesen Schwachstellen gehört, waren sich aber nicht bewusst, dass sie Log4j oder eine andere SolarWinds-Komponente verwenden. Mit einer SBOM wissen Sie genau, welche Pakete Sie installiert haben – und vor allem, welche Versionen dieser Pakete. So können Sie bei Bedarf aktualisieren, um auf der sicheren Seite zu sein. Eine Software Bill of Material kann auch über die Sicherheit hinausgehen: SBOMs können Entwicklern beispielsweise dabei helfen, den Überblick über die Open-Source-Lizenzen ihrer verschiedenen Softwarekomponenten zu behalten, was wichtig ist, wenn es darum geht, Applikationen zu distribuieren. SBOMs – Pflicht in den USA und bald auch in Europa Der SolarWinds-Hack hat insbesondere bei der US-Regierung die Alarmglocken schrillen lassen – auch weil viele US-Bundesbehörden die kompromittierte Komponente eingesetzt hatten. Deshalb enthielt die im Mai 2022 von der Biden-Regierung erlassene Cybersecurity-Verordnung auch Richtlinien im Zusammenhang mit Software Bill of Materials. Das US-Handelsministerium veröffentlichte einen Leitfaden, welche grundlegenden Elemente in SBOMs enthalten sein müssen. Obwohl sich die Anordnung speziell auf diejenigen bezieht, die in direkter Beziehung zu den US-Bundesbehörden stehen, werden die Regelungen weitergehende Auswirkungen haben. Schließlich werden die an die US-Regierung verkauften Produkte, die nun mit einer SBOM ausgeliefert werden müssen, größtenteils auch an andere Unternehmen und Organisationen verkauft. Viele Softwarehersteller hoffen, dass die Kunden aus der Privatwirtschaft SBOMs ebenfalls als Mehrwert betrachten. Außerdem ist das staatliche Auftragswesen selbst eine Lieferkette, wie Sounil Yu, ehemaliger Chief Security Scientist bei der Bank of America sowie CISO bei JupiterOn, unterstreicht: “Es gibt nur eine bestimmte Anzahl von Unternehmen, die direkt mit der US-Regierung zusammenarbeiten und von der Verordnung betroffen sind. Die Auswirkungen auf der zweiten Zuliefererebene sind noch wesentlich größer.” In Europa wird die SBOM ebenfalls verpflichtend – und zwar im Rahmen der Umsetzung des Cyber Resilience Act bis Ende 2027. Software Bill of Materials – Aufbau Als Reaktion auf die Executive Order veröffentlichte die National Telecommunications and Information Administration (NTIA) im Juli 2021 den Leitfaden “The Minimum Elements For a Software Bill of Materials” (PDF). Das Dokument könnte zu einem De-facto-Standard für SBOMs in der gesamten Branche werden und legt sieben Datenfelder fest, die jede SBOM enthalten sollte: Name des Anbieters: Der Name einer Einheit, die eine Komponente erstellt, definiert und identifiziert. Komponentenname: Die Bezeichnung, die einer vom ursprünglichen Lieferanten definierten Softwareeinheit zugewiesen wird. Version der Komponente: Eine Kennung, die vom Lieferanten verwendet wird, um eine Änderung der Software gegenüber einer zuvor identifizierten Version anzugeben. Andere eindeutige Identifikatoren: Andere Informationen, die verwendet werden, um eine Komponente zu identifizieren oder als Nachschlageschlüssel für relevante Datenbanken dienen. Das könnte etwa ein Identifikator aus dem NIST CPE Dictionary sein. Abhängigkeitsbeziehung: Kennzeichnet die Beziehung, in der eine Upstream-Komponente X in Software Y enthalten ist. Das ist besonders wichtig für Open-Source-Projekte. Autor der SBOM-Daten: Der Name der Entität, die die SBOM-Daten erstellt. Zeitstempel: Aufzeichnung des Datums und der Uhrzeit der Zusammenstellung der SBOM-Daten. SBOMs müssen darüber hinaus auch folgende Anforderungen erfüllen: Die SBOM muss in einem von drei standardisierten Formaten vorliegen, damit sie maschinenlesbar ist – SPDX, CycloneDX oder SWID-Tags. Mit jeder neuen Softwareversion muss eine neue SBOM generiert werden, um sicherzustellen, dass sie auf dem neuesten Stand ist. Die SBOM muss nicht nur Abhängigkeitsbeziehungen enthalten, sondern auch Aufschluss darüber geben, wo solche Beziehungen wahrscheinlich bestehen, aber der Organisation, die die SBOM erstellt, unbekannt sind. SBOM erstellen – so geht’s Wenn Sie diesen Artikel lesen, empfinden Sie es möglicherweise als entmutigende Aufgabe, eine Software Bill of Materials zu erstellen. Schließlich muss es ein Alptraum sein, all diese Informationen manuell zusammenzutragen. Glücklicherweise werden SBOMs in den meisten Fällen mit Hilfe von SCA-Tools (Software Composition Analysis ) automatisch erstellt. Diese Tools werden häufig in DevSecOps-Pipelines eingesetzt und spielen nicht nur für die Erstellung von SBOMs eine Rolle. SCA-Tools durchsuchen Ihre Codeverzeichnisse nach Paketen und vergleichen sie mit Online-Datenbanken, um sie mit bekannten Bibliotheken abzugleichen. Es gibt aber auch Werkzeuge, die eine Software Bill of Materials im Rahmen des Software-Build-Prozesses erstellen. Die OWASP Foundation hat eine umfassende Liste von SCA-Tools zusammengestellt, die von einfachen, quelloffenen Kommandozeilen-Tools bis hin zu spezialisierten, kommerziellen Produkten reicht. Wenn Sie tiefer in diesen Bereich eintauchen möchten, sollten Sie außerdem einen Blick auf unseren Artikel “7 Tools, die Ihre Softwarelieferkette absichern” werfen. Wenn Sie verteilte Software entwickeln, wird es immer wichtiger, SBOMs in Ihre Entwicklungspraxis zu integrieren. Auch wenn Sie keine Verträge mit der US-Regierung abschließen – Sie sollten sich angesichts der Bedrohungslage in jedem Fall Gedanken über die Sicherheit ihrer Softwarelieferkette machen.
Thousands of Apache ActiveMQ instances still unpatched, weeks after an actively exploited hole discovered
Two weeks after researchers using an AI tool discovered a major hole in Apache’s ActiveMQ messaging middleware, there are still thousands of unpatched instances open to the internet, more evidence that many application developers and IT leaders aren’t paying close attention to warnings about vulnerabilities. While the remote code injection vulnerability [CVE-2026-34197] was revealed on April 7, according to statistics from the ShadowServer Foundation, there are still almost 6,500 unpatched instances of ActiveMQ open to being abused. “The fact that ShadowServer is still seeing 6,000+ unpatched boxes nearly two weeks later is just mind-blowing,” IT analyst Rob Enderle of the Enderle Group told CSO. “In a world where an LLM can help an attacker weaponize a bug the second it’s announced, taking 12 days to patch is essentially a suicide note for your network”. Vulnerable are versions of ActiveMQ and ActiveMQ Broker before 5.19.4, and 6.0 to before 6.2.3; this means the flaw could have been exploited for over a decade. ActiveMQ Artemis isn’t affected. The issue is so serious that the US Cybersecurity and Infrastructure Security Agency (CISA) added the bug to its known and exploited vulnerability list (KEV) this week, urging federal agencies to promptly update their applications. The move should also be seen by private sector developers who use ActiveMQ in their applications, and IT and security leaders who have apps using ActiveMQ in their environments, as a cue to act fast and upgrade to patched versions 5.19.4 or 6.2.3. Bug found by AI in 10 minutes The hole was discovered by researchers at Horizon3.ai using Anthropic’s Claude AI assistant. It took them about 10 minutes, an illustration of how quickly modern AI tools can be used by experts to find vulnerabilities. Anthropic says its limited release Claude Mythos tool is even better than Claude at finding flaws. Apache says an authenticated attacker can exploit the hole with a crafted discovery URI that triggers a parameter to load a remote Spring XML application context using ResourceXmlApplicationContext.  Because Spring’s ResourceXmlApplicationContext instantiates all singleton beans before the BrokerService validates the configuration, arbitrary code execution occurs on the broker’s Java VM through bean factory methods such as Runtime.exec. “This vulnerability sat there for 13 years,” noted Enderle. “Humans missed it, scanners missed it, but Claude finds it in what, 10 minutes? That’s a massive capability leap. AI is basically acting like an archeologist for exploits, digging up every skeleton we’ve left in our legacy closets for the last decade.” The problem for CSOs is “we’re basically bringing a knife to an AI gunfight,” he added. “Most IT shops are still stuck in ‘Human-Speed,’ waiting for a weekend maintenance window or a committee meeting, while the bad guys are running at ‘Machine-Speed.’ If you aren’t automating your defense and using AI to patch as fast as AI is finding the holes, you aren’t just behind; you’re already breached and just don’t know it yet.” Automation is key “If a company hasn’t patched this by now, it’s moved past a ‘resource issue’ and straight into professional negligence,” Enderle said. “We’ve got to stop treating patching like a chore and start treating it like a survival requirement.” The fix is simple, but hard for most old-school IT shops to swallow, he noted: Get the humans out of the way. “If AI is finding holes in minutes,” he said, “a 12-day manual patch cycle is basically an invitation to get robbed.” Start by putting together a software bill of materials for every app in your environment, Enderle advised. “Without it, you’re just guessing what’s under the hood. You need a live, automated inventory, using standards like CycloneDX, so the second a bug like this [ActiveMQ] hits, you aren’t scanning. You already know exactly which apps are carrying the poisoned ingredient.” Second, he said, auto-patch the small stuff and use automated testing for the big systems. Again, he maintained that if IT is still waiting for a weekend maintenance window or a committee approval to fix a critical flaw, “you’re playing a 2010 game in a 2026 world.”  “Bottom line,” he said: “If you don’t know what’s in your software, and you can’t fix it faster than an LLM can find it, you’re just a target.”
SystemBC C2 Server Reveals 1,570+ Victims in The Gentlemen Ransomware Operation
Threat actors associated with The Gentlemen ransomware‑as‑a‑service (RaaS) operation have been observed attempting to deploy a known proxy malware called SystemBC. According to new research published by Check Point, the command-and-control (C2 or C&C) server linked to SystemBC has led to the discovery of a botnet of more than 1,570 victims. "SystemBC establishes SOCKS5 network tunnels within
22 BRIDGE:BREAK Flaws Expose Thousands of Lantronix and Silex Serial-to-IP Converters
Cybersecurity researchers have identified 22 new vulnerabilities in popular models of serial-to-IP converters from Lantronix and Silex that could be exploited to hijack susceptible devices and tamper with data exchanged by them. The vulnerabilities have been collectively codenamed BRIDGE:BREAK by Forescout Research Vedere Labs, which identified nearly 20,000 Serial-to-Ethernet converters exposed
‘Scattered Spider’ Member ‘Tylerb’ Pleads Guilty
A 24-year-old British national and senior member of the cybercrime group "Scattered Spider" has pleaded guilty to wire fraud conspiracy and aggravated identity theft. Tyler Robert Buchanan admitted his role in a series of text-message phishing attacks in the summer of 2022 that allowed the group to hack into at least a dozen major technology companies and steal tens of millions of dollars worth of cryptocurrency from investors.
Ransomware Negotiator Pleads Guilty to Aiding BlackCat Attacks in 2023
A third individual who was employed as a ransomware negotiator has pleaded guilty to conducting ransomware attacks against U.S. companies in 2023. Angelo Martino, 41, of Land O'Lakes, Florida, teamed up with the operators of the BlackCat ransomware starting in April 2023 to assist the e-crime gang in extracting higher amounts as ransoms. "Working as a negotiator on behalf of five different
5 Places where Mature SOCs Keep MTTR Fast and Others Waste Time
Security teams often present MTTR as an internal KPI. Leadership sees it differently: every hour a threat dwells inside the environment is an hour of potential data exfiltration, service disruption, regulatory exposure, and brand damage.  The root cause of slow MTTR is almost never "not enough analysts." It is almost always the same structural problem: threat intelligence that exists
NGate Campaign Targets Brazil, Trojanizes HandyPay to Steal NFC Data and PINs
Cybersecurity researchers have discovered a new iteration of an Android malware family called NGate that has been found to abuse a legitimate application called HandyPay instead of NFCGate. "The threat actors took the app, which is used to relay NFC data, and patched it with malicious code that appears to have been AI-generated," ESET security researcher Lukáš Štefanko said in a
Azure SRE Agent flaw lets outsiders silently eavesdrop on enterprise cloud operations
A high-severity authentication flaw in Microsoft’s Azure SRE Agent exposed sensitive agent data to unauthorized network access, according to a confirmed vulnerability disclosure. The issue was identified by Enclave AI researcher Yanir Tsarimi, who detailed the findings in a blog post describing how agent interactions could be accessed without proper authentication controls. The vulnerability has been tracked as CVE-2026-32173 and rated critical with a CVSS score of 8.6. In the blog, Tsarimi described scenarios where agent activity could be observed during execution, including interactions between users and the system. The exposure stemmed from an authentication gap in the service, allowing access to data streams without valid credentials. Microsoft classified it as an improper authentication issue that allows an unauthorized attacker to disclose information over a network, the NVD entry said. “Imagine you hired an assistant who has access to everything: your servers, your logs, your passwords, your source code. Now imagine a total stranger, from a completely unrelated company, could silently listen to every conversation that assistant has,” Enclave researcher Yanir Tsarimi wrote. “That’s what we found in Azure SRE Agent.” Microsoft has since fixed the issue, the blog added. The fix was applied server-side, and Microsoft’s advisory states that no customer action is required. Azure SRE Agent reached general availability on March 10. Multi-tenant by default The agent streams all activity through a WebSocket endpoint called /agentHub, the blog said. The endpoint required a token to connect, but the underlying Entra ID app registration was configured as multi-tenant, meaning any account from any Entra ID tenant could obtain a valid token that the hub would accept. “The hub then checked: Is the token valid? Yes. Is the audience correct? Yes. It never asked: Does this caller belong to the target’s tenant? Are they authorized to use this agent? Do they have any role on this resource?” Tsarimi wrote. Once connected, the hub broadcasts all events to all clients with no identity filtering, the blog said. The exposed channel included user prompts, agent responses, internal reasoning traces, every command executed with full arguments, and the command output. “In our own test environment, we watched the agent run a routine task and return deployment credentials for live web applications,” Tsarimi wrote. “An eavesdropper on a real target would have received the same. Silently. With nothing to indicate anyone else was on the line.” Exploitation required only the target agent’s subdomain, which Enclave described as predictable and enumerable, and roughly 15 lines of Python. Third-party trackers identified the affected component as the Azure SRE Agent Gateway SignalR Hub. Watching a privileged operator think out loud The category of flaw should not be compared too closely to a conventional API bug, said Alexander Hagenah, cybersecurity researcher and executive director at Zurich-based financial infrastructure operator SIX Group. “A normal API issue is usually bound by a specific endpoint, dataset, or permission check. With an AI operations agent, the agent itself becomes the aggregation point for infrastructure state, logs, source code, incident context, commands, outputs, and sometimes credentials that appear during troubleshooting,” Hagenah said. “In practical terms, it can look like watching a privileged operator think out loud,” he added. The exposure does not amount to automatic infrastructure compromise, Hagenah said, but it can be more valuable than many read-only bugs. Attackers typically have to work hard after initial access to understand an environment. An SRE agent may already have that context assembled for them. The connection also left no trace on the victim’s side, the researcher wrote. “Victim organizations had no way to detect it, no way to investigate after the fact, and no way to scope what had been exposed.” Considerations for enterprises Enclave, as per the blog post, noted that organizations that ran Azure SRE Agent during the preview window must treat the period as potentially exposed and review any credentials, configuration data, or sensitive information that may have passed through agent conversations or CLI outputs. Hagenah said agentic operations services need to be governed more like privileged automation platforms than ordinary SaaS tools. “Before granting that level of access, I would want very clear answers on tenant isolation and resource-level authorization. It should not be enough that a token is valid. The service has to verify that the caller belongs to the right tenant, is authorized for that specific agent, and is allowed to access that specific stream, thread, tool output, or action,” he said. The agent should run under a dedicated managed identity with minimal permissions, and integrations with command execution, log query, source repositories, and incident platforms should be reviewed like any other privileged system, Hagenah said. Enterprises also need to know who connected, what threads they accessed, what commands ran, and what output was returned, with logs exportable to the SIEM. Microsoft did not immediately respond to a request for comment.
Prompt injection turned Google’s Antigravity file search into RCE
Security researchers have revealed a prompt injection flaw in Google’s Antigravity IDE that could be weaponized to bypass its sandbox protections and achieve remote code execution (RCE). The issue came from Antigravity’s ability to allow AI agents to invoke native functions, like searching files, on behalf of the user. Designed to kill complexity, the feature could allow attackers to inject malicious input into a tool parameter. According to Pillar Security researchers, the vulnerability could bypass Antigravity’s “most restrictive security configuration,” Secure Mode. The flaw was reported to Google in January, which acknowledged and fixed the issue internally, awarding Pillar Security a bounty through its Vulnerability Reward Program (VRP) for AI-specific categories. Google did not immediately respond to CSO’s request for comments. File search could be turned into code execution Pillar’s prompt injection vector relied on Antigravity’s “find_my_name” tool and an “fd” utility within. find_my_name is one of Antigravity’s built-in agent tools that allows the AI to search for files and directories in the project workspace using the fd command line. What was happening is that any string beginning with “-” was being interpreted by fd as a flag rather than a search pattern, allowing execution of binaries within files matching a “-Xsh” pattern. “The technique exploits insufficient input sanitization of the find_by_name tool’s Pattern parameter, allowing attackers to inject command-line flags into the underlying fd utility, converting a file search operation into arbitrary code execution,” the researchers said in a blog post. Essentially, instead of just locating files, “fd” could be tricked into executing attacker-supplied binaries across those files using a crafted prompt that manipulates the “Pattern” parameter. The researchers demonstrated this by creating a file in the local directory with the malicious prompt to exploit the “pattern” injected. Antigravity picked up the file, ran its intended tasks (like launching Calculator), and also launched the search tool, now primed to execute “-Xsh” patterns. This could also be turned into remote code execution via indirect prompt injection. “A user pulls a benign-looking source file from an untrusted origin, such as a public repository, containing attacker-controlled comments that instruct the agent to stage and trigger the exploit,” the researchers explained. The worst part was that it was unstoppable with the existing protection. Google’s sandbox never got a chance Antigravity’s Secure Mode, which is designed to restrict network access, prevent out-of-workspace writes, and ensure all command operations run strictly under a sandbox context, could not flag or quarantine this technique. This is because the find_my_name tool is called much before Secure Mode restrictions are evaluated. “The agent treats it as a native tool invocation, not a shell command, so it never reaches the security boundary that Secure Mode enforces,“ the researchers noted. The issue was trimmed down to a twofold root cause. A “No input validation” at the Pattern parameter, which accepts arbitrary strings without checking for legitimate search pattern characters. The second was “no argument termination,” which refers to fd’s inability to distinguish between flags and search terms. Google has already fixed the flaw internally, and Antigravity users need not do anything else to remain protected. However, the flaw’s ability to bypass Secure Mode, Pillar researchers point out, underlines that security controls focused on shell commands are insufficient. “The industry must move beyond sanitization-based controls toward execution isolation,” they said. “Every native tool parameter that reaches a shell command is a potential injection point.”
No Exploit Needed: How Attackers Walk Through the Front Door via Identity-Based Attacks
The cybersecurity industry has spent the last several years chasing sophisticated threats like zero-days, supply chain compromises, and AI-generated exploits. However, the most reliable entry point for attackers still hasn't changed: stolen credentials. Identity-based attacks remain a dominant initial access vector in breaches today. Attackers obtain valid credentials through credential stuffing
Google Patches Antigravity IDE Flaw Enabling Prompt Injection Code Execution
Cybersecurity researchers have discovered a vulnerability in Google's agentic integrated development environment (IDE), Antigravity, that could be exploited to achieve code execution. The flaw, since patched, combines Antigravity's permitted file-creation capabilities with an insufficient input sanitization in Antigravity's native file-searching tool, find_by_name, to bypass the program's Strict
Why identity is the driving force behind digital transformation
Identity centric technologies have undergone a significant transformation in recent times. Gone are the days when it was all about logging in and out of any given system. Today, identity has become the backbone of all digital enterprises. It’s the ‘invisible engine’ that powers everything. From security to how modern-day products are sold. Today’s Identity based frameworks not only controls who can access what, how and when, they also help businesses work efficiently, improves customer satisfaction and reduces fraud and risk, especially associated with back-office jobs. In this article, we’ll look at why identity is key and how it supports several key aspects of digital transformation. Identity is the new security boundary Traditionally, enterprises used firewalls and internal network policies to protect themselves against any external attacks. If you were inside the company network, then trust was automatically granted. If you were not, you were perceived as a threat. That world no longer exists. Because, unlike in the past, companies have employees working from different geographic locations or work from home. Most systems are hosted in the cloud. Customers can access services from mobile devices. And even programs and bots require access to the system. This means that identity is the new perimeter. And traditional methods of securing systems won’t work anymore, as there’s no clear definition of who is ‘inside’ or ‘outside’ the perimeter anymore. Instead of relying on location to grant access, verification is performed on the person or system making the request, and subsequently authorization checks are performed to allow the requested action. Managing user access is not easy at an enterprise scale. And it doesn’t get any easier for those using complicated network rules and manual setups. In fact, it often results in errors and delays. This is where identity-based solutions come into play. When someone from any team logs in, the identity system will accurately pinpoint: Who they are and what they are up to. The project they are working on. Which environment should they use? Using this information, the system can determine which resource someone needs, when they need it and how to use it. The principle behind it is ‘never trust, always verify’. With it, errors that normally occur are reduced, less manual configuration is required and overall efficiency and accountability increase. When something goes haywire, it becomes easy for the enterprise to track which resource was accessed by whom and when. This helps teams move faster without losing control. How identity helps software teams work faster Software is usually managed in various stages during its creation. To do this effectively, companies have different test environments, such as: Development Testing Staging Performance testing For all these environments, we’ve got different teams working simultaneously on the same software. For example, when development teams are working on building new features for the software, business users would be validating the beta version in the parallel testing environment. Modern Identity structure easily carries this context in the message and helps route transactions to the appropriate environment. Identity helps to control exactly what people can see and do Every organization has its own hierarchical structure. Within it, everyone has limitation to what they can access or see. For instance, a junior officer cannot have the same privileges as a manager. Similarly, a manager cannot have the same authorization as the CEO. If everyone had the same access, it would create a serious security risk. This is where modern identity systems shine. It stores information about users based on department, job description, location, level of responsibility and whether the user has special permissions. When logging in, this information travels with them. The application uses this to determine which information to disclose and which to restrict. Put simply, some users see certain menu options while others using the same system can’t see them at all. Similarly, others might have the ability to read and write data, while others can only view it. This is what is known as fine-grained access control, where access is given to users when they truly need it. Some of its benefits are: Enhanced security against internal misuse of data. Reduced data leaks. Makes it easy to comply with data protection laws. Auditing and filing of reports are simplified. Beyond security: Identity powers customer personalization Identity goes beyond just managing employee access. It helps the business grow as it manages crucial customer profile information such as preferences, purchase history, product interest and consent for data use. The data collected is used to market personalized products, send relevant offers, show content based on previous browsing history and even communicate in their customer’s preferred language. Before customer identity management systems, all this information was scattered across different systems. One database could handle emails, another purchase history and another might track website visits. With unified identity management, all this information is summarized under one customer. This translates to better customer experience, higher conversion rates, increased customer loyalty and better marketing. Plus, when customers see how their data is being handled, they are more likely to trust the brand and give permissions for their data to be used. Identity reduces risk and prevents fraud in finance This is where identity is needed the most because financial institutions, such as banks, deal with sensitive information and large amounts of money. Any slight error in processing data could easily incur huge losses and serious repercussions to the institution. In many cases, most customers usually have multiple accounts: Savings account Credit card Mortgage Investment account Business account All these accounts usually exist in different systems. With centralized identity systems, they can all be linked using a single identifier and traced back to one verified customer. This creates a complete financial picture of the customer. Better risk assessment With a clear picture, banks can make informed decisions, which in the long run helps reduce losses. We’re talking about smarter lending decisions, better assessment of risks, income and debt, repayment history, just to mention a few. Stronger fraud detection For any business to stand a chance against sophisticated modern cyberattacks like fraud, early detection is key. With AI-based identity security, detection takes place in real time. So, when someone makes a transaction, the system cross-checks with information such as login location, device type, behavioral patterns and transaction history. If an issue arises during this time, the system can either request extra verification or block the transaction entirely. Detecting fake identities Criminals today are evolving almost at the same pace as technology. To avoid detection, some of them create fake identities by mixing real and false information. Without strong security measures in place, most of them usually get away with it. To prevent this, identity systems based on vast information collected can be able to tell what ‘normal’ looks like for each customer and what doesn’t make sense. For example, when one personal number is linked to multiple unrelated accounts. Building identity as core infrastructure To support the areas this article talked about, it’s crystal clear that organizations can’t just treat identity as an old-fashioned list of names. It must be woven within the very foundation of the business. Here are three golden rules to make that happen: 1. It must be ‘real time’ The system should always share updates whenever they occur. For example, when a user logs in or changes their privacy settings, the information should be propagated throughout the entire system so that other parts of the company can react. 2. It must be easy to integrate with other systems They should be like plug-and-play tools that allow developers to easily connect with others without necessarily needing any assistance from a specialist. 3. It must be built for governance Not everyone needs to have unlimited access to the system. Each organization needs to have a clear set of rules on who gets access to what and when. On top of that, these permissions need to be reviewed from time to time, and all the activities tracked. This not only ensures the company stays safe but also complies with the law. Identity is the foundation of modern business Time and time again, most people often associate digital transformation with advanced new technology. But it’s not just about that. It involves connecting systems, data and the people using these resources smartly and securely. Identity makes this possible. It ensures that only the right users access the right resources at the right time. With identity, software developers are creating and deploying applications much faster, organizations get to control access to sensitive information, businesses can create personalized customer experiences and banks can detect and manage fraud right before it occurs. Therefore, as more businesses continue their migration towards digital transformation, identity needs to be established as the foundation. Those who do this are better positioned to grow, innovate and compete in this digital age. This article is published as part of the Foundry Expert Contributor Network.Want to join?
Top techniques attackers use to infiltrate your systems today
Much of the talk around cybersecurity these days revolves around AI and the threat it poses to corporate systems when used by nefarious actors. But the reality on the ground remains a little more mundane than polymorphic AI malware and criminal masterminds putting machine learning and generative AI to work at scale. Still, keeping on top of even minor nuances and emerging trends in the techniques cyberattackers are deploying of late can greatly help cyber defenders in their task. Of note is the fact that attackers are increasingly exploiting identity as a preferred method for infiltrating systems. While exploiting vulnerabilities also remains an important vector with its own emerging subtleties in practice, phishing, stolen credentials, and social engineering are among the more common root causes of initial attack today, according to threat response experts. “Identity-related attack techniques such as phishing (41%), stolen credentials (18%), and social engineering (12%) dominating our incident response engagements,” Alexandra Rose, director at the Counter Threat Unit at Sophos, tells CSO. Rose adds: “Attackers are increasingly looking to leverage weaknesses that can’t be targeted by patching — instead going after the human link in the chain: people.” Entry points created by expanding hybrid and cloud environments, integrations with AI tooling, and new SaaS apps are also particularly attractive to threat actors, allowing them to infiltrate systems without needing to deploy traditional malware. “Attackers [are exploiting] trusted tools, identities, and user behaviour rather than relying on technical sophistication” to mount attacks, according to threat intel vendor ReliaQuest’s latest Annual Cyber-Threat Report. Here, cyber experts quizzed by CSO identify the most prevalent cyberattack techniques being deployed against enterprises today. Drive-by RMM misuse Attackers have increasingly been abusing legitimate remote monitoring and management (RMM) tools to camouflage attacks on corporate networks. Designed to help IT teams manage systems remotely, popular RMM tools, such as ConnectWise ScreenConnect, Tactical RMM, and MeshAgent, are often abused by attackers for command-and-control, lateral movement, and ransomware deployment. Now, trojanized versions of RMM tools are being dropped directly onto hosts, often through drive-by compromise, according to ReliaQuest. ConnectWise ScreenConnect led RMM-related incidents between December 2025 up until the end of February 2026, according to the threat intel vendor. A separate study by managed detection and response firm Blackpoint found that abuse of legitimate RMM tools represented 30% of incidents handled by the firm. Network security device hacking Network edge devices have increasingly drawn attackers’ attention over the past two years, establishing a new battleground where the very devices meant to protect the network have become attractive targets for exploitation. As a result, flaws in security device, such as SSL VPN systems and other gateways, are among the top initial access vectors for attackers. SSL VPN compromises, for example, accounted for 33% of identifiable activity, according to Blackpoint. ClickFix ClickFix is a social engineering tactic that aims to trick prospective marks into pasting and executing malicious PowerShell commands from fake “fix” prompts. Because these bogus prompts come from either compromised websites or manipulated search results, the approach bypasses traditional security controls such as email filters or denylists. ClickFix scams often uses fake CAPTCHA pages as the lure. The methodology is most frequently used to distribute remote access trojans or infostealers, but attackers have also begun to feature ClickFix in ransomware attacks. “ClickFix adoption continues to expand across the attacker spectrum, with ransomware operators like LeakNet now using ClickFix lures to run campaigns directly rather than purchasing access from initial access brokers,” according to ReliaQuest. Identity-based attacks Attackers are increasingly impersonating legitimate users, machines, or services to gain access to systems, data, or infrastructure. The technique is on the upswing in part due to improved security defenses, according to some experts, and also demonstrates attackers’ interest in targeting authentication mechanisms rather than exploiting software vulnerabilities directly. “Endpoint detection and response technologies have pushed criminals into stealing credentials — or buying them from thieves — and then using them for authentication as account users,” says Tom Exelby, head of cybersecurity at UK-based cybersecurity services firm Red Helix. “Once they have access, they can augment their privileges through systems such as Microsoft Active Directory and Entra ID.” Instead of stealing passwords, attackers steal active authentication tokens to bypass multi-factor authentication (MFA) protections. Attackers are increasingly using OAuth consent phishing and reverse proxy kits to steal session tokens and bypass MFA, adds cloud-native security firm Netskope. “Attackers targeting Microsoft 365 environments are also adopting adversary-in-the-middle attacks,” Red Helix’s Exelby adds. “They capture credentials, MFA responses, and session cookies by using phishing kits as a proxy between the target and the legitimate authentication service.” Cybercriminals are using platforms such as the Tycoon 2FA phishing-as-a-service to run adversary-in-the-middle (AiTM) attacks. Many of the victims of this attack vector are “likely to be SMBs with limited cybersecurity resources,” according to Red Helix. Phishing Despite a year-over-year decline in the number of people clicking on phishing links, in part due to improved user education, this traditional form of social engineer remains a problem. According to a recent study by Netskope, 87 out of every 10,000 users click on a phishing link each month. Microsoft remains the brand attackers impersonate most. Remote and hybrid workforces have given attackers more opportunities for phishing and credential theft, and now the power of AI in facilitating such attacks is becoming a major concern. Cybercriminals have been putting AI to use to develop highly personalized phishing lures, automated reconnaissance, and synthetic voice and deepfake attacks. Hacking machine identities The rapid profileration of machine identities is proving to be a wellspring for attackers seeking inroads into corporate systems. Much of this is due to increased use of service accounts, containers, APIs, and the automation of DevOps, but agentic AI, with its promise of autonomous AI activity, is another rising source of concern for security organizations. “With non-human identities central to infrastructure, attackers are inevitably focusing on compromise of service accounts and API identities, which give them long-lived credentials and a broad range of permissions,” says Red Helix’s Exelby. Exelby adds: “Machine identities often have weak protection, are notoriously invisible, and poorly managed.” Managed service providers that hold privileged access to many client’s systems have a magnetic attraction for attackers as a potential route to carry out supply chain attacks. Even a midsize business is likely to have hundreds of SaaS apps and thousands of identities criminals can exploit. Shai-Hulud: The supply-chain attack evolves In September 2025, credential-stealing code wormed its way through scores of npm libraries, adding a modern twist to the supply chain attack. What would become known as Shai-Hulud included self-propagation logic that would eventually spread to hundreds of packages by automatically replicating and injecting itself into projects owned by compromised maintainers. Later versions of the npm supply-chain worm (“Shai-Hulud 2.0”) have expanded into cloud credential theft, making it the most significant new entry in ReliaQuest’s attack technique list since the previous edition last year. “The self-replicating nature [of the malware] makes containment particularly difficult once it enters a development pipeline,” ReliaQuest warns. Countermeasures Defenders should prioritize ClickFix-specific user training, enforce remote monitoring and management (RMM) tool allowlists, and centralize SaaS audit logging, ReliaQuest advises. Protection against the tide of identity-based attacks requires a shift to layered defenses. “Layered defences should include phishing-resistant authentication with hardware security keys, FIDO2 password-free approaches or certificate-based methods to reduce credential theft and adversary-in-the-middle attacks,” says Red Helix’s Exelby. Exelby adds: “Zero trust and least privilege access principles are essential, validating continuously using device posture, user behaviour and network context, along with risk-scoring. Time-bound access for accounts should be part of this.”
The thin gray line: Handala, CyberAv3ngers and Iran’s proxy ops
On April 7, six US government agencies issued a critical advisory warning domestic private sector organizations of potential infrastructural cyberattacks conducted by Iranian-affiliated Advanced Persistent Threat (APT) actors. The advisory stops short of attributing these threats to a single group but makes reference to 2023 attacks on US water and wastewater facilities linked to the known Iranian APT “CyberAv3ngers”, suggesting a possible correlation between historical and current incidents. Reports on “CyberAv3ngers” and analogous group “Handala Hack Team” — who have recently been in headlines for their numerous clashes with the FBI — emphasize that while these operations present themselves as radical pro-Palestinian hacktivist collectives, both are believed to be heavily-resourced and directly tied to the Iranian Ministry of Intelligence (MOIS). Sometimes referred to as “fronts”, “proxy insurgents” or “ghost groups”, these presumed false flag operations represent a longstanding obfuscation tactic amongst the so-called “Big Four” of cybercrime — Russia, China, North Korea and Iran. Notably, Russia’s largest military intelligence agency, the GRU, is widely known to recruit talented threat actors to execute complex cyber campaigns against political enemies. The Big Four are known for their pervasive assertions of soft power, otherwise known as ‘Influence Cyber Operations’ (ICOs). Each has a flagship operation in this field: Russia with disinformation campaigns, China with long-term operational technology espionage, North Korea with remote worker scams and laptop farms, and Iran with critical infrastructure disruptions. The “gray area” of plausible deniability Iran’s use of proxy insurgent groups follows a clear line of logic. A radical activist organization would be expected to execute politically motivated attacks, but not on a large scale or with exceptional technical skill. In the case of a group like Handala, openly proclaiming to be pro-Iranian nationalists aligns their interests with the Iranian government, making them a perfect cover for state-backed operations. It’s a strategy that allows for symbolic retributive actions by Iran without having to reveal the extent of its tactical power, and — crucially — one that allows for attacks to continue in times of supposed peace. This “death by a thousand cuts” approach — sometimes referred to as “soft warfare” or “gray warfare” — follows a military doctrine centered around a consistent, slow erosion of the enemy via covert operations. Obscuring the state’s involvement beneath a grandiose, pro-Iranian rhetoric allows it to affect change in the US with less chance of immediate retaliation, especially compared to an act of direct physical aggression, such as an overseas bombing on US soil. A state of perpetual interference To understand how proxy insurgent groups such as Handala fit within Iran’s modern-day intelligence ecosystem, we first need to look at the historical development of the country’s intelligence operations. In 1953, the United States and Britain (via conduit operations of the CIA and MI6, respectively) instigated a coup in Iran that displaced then-Prime Minister Mohammad Mosaddegh in favor of strengthening the imperialist power of its Shah, Mohammad Reza Pahlavi. The US hoped that by bolstering Iran’s monarchical leader in exchange for underlying influence in a newly pro-Western regime, it would be able to gain access to Iran’s rich petroleum resources. Part of this influence included the establishment and shaping of SAVAK in 1957, the first intelligence agency and secret police of the Imperial State of Iran. Despite being classed as a civilian organization, SAVAK was primarily composed of military figures whose objectives involved suppressing opposition, surveillance of threats to the monarchy and media control within Iran, often operating outside existing laws. When the group was violently dismantled following the 1979 Iranian Revolution, its replacement MOIS — still the country’s dominant intelligence organization — borrowed significantly from its personnel, core philosophy and tactics. All current Iranian entities involved in intelligence are technically required to report to and collaborate with MOIS, including the Islamic Revolutionary Guards Corps (IRGC), which was notably created directly in response to the first Supreme Leader’s suspicions of Iran’s existing military forces. Iran’s modern-day intelligence capabilities have ultimately formed from a mishmash of competing outfits. This includes MOIS, the Islamic Revolutionary Kumitehs, SAVAMA, the IGRC and its paramilitary force the IRGC-QF, all of which were established to support various pro-revolutionary and counterintelligence directives at the end of the 1970s and throughout the 1980s. In short, Iran’s cyber ecosystem has been shaped by decades of political upheaval, revolutionary factioning and calculated external influence. The protective front of a “pro-revolutionary” ideology, therefore, has long been used by the Iranian state to justify acts of political violence, espionage, surveillance and subterfuge. What do these groups actually represent? Western perceptions of groups such as Handala Hack Team and CyberAv3ngers are likely distorted by culturally based assumptions. In the US, for example, we tend to associate terms like “insurgent” with anti-authoritarians, not government loyalists. However, historically in Iran, civilian and military intelligence enterprises have been simultaneously enmeshed and compartmentalized by design. While there hasn’t been much discussion of the semantics in this scenario to-date, there’s no real qualifier preventing Handala from technically being considered a “radical hacktivist group” while also being a highly intentional product of the state. Whether they actually carry the values that they espouse publicly is anyone’s guess. Think of it this way: a radical activist organization is created to fight whatever it deems as an “oppressive system”, using symbolic direct action to compensate for its lack of size. And while Iranian APT groups are well-resourced domestically, in a global arena, they are still undeniably small. When held next to cyber superpowers like the US and Israel, even Iran’s most elite task forces are microscopic by comparison. A captive audience Experts have noted that Handala’s social media posts often contain exaggerated, near-theatrical claims. One blog post reads: “The slightest aggression against Iran’s vital facilities will mean the beginning of a devastating reaction that will turn all these vital infrastructures to ashes.” The group makes constant, unsubstantiated threats with claims of successful breach operations that quickly fade into the ether, never to be backed with evidence. However, to dismiss Handala’s evangelizing as laughable is missing the point — intentionally or not, Handala’s outsized assertions of its own power to retaliate against its aggressors highlight just how asymmetric the whole conflict really is. If nothing else, readers of Handala Hack’s messaging — conveniently written in English — are forced to grapple with the reality of a massive power imbalance between “us” and “them” just to figure out how safe they are allowed to feel. Americans engaging with Handala’s threats will likely feel alarmed, with that fear quickly turning to frustration that random American businesses are being symbolically attacked on behalf of entire industries due to Iran’s limited targeting capabilities. Suddenly, the imminent specter of Iran as presented by the US begins to fall apart. This is the true advantage of a state entity adopting a radical persona, particularly one with an air of “righteous fury” or a “bleeding heart”. Many have accused Handala of falsely claiming to be a pro-Palestinian group, but from a strategic standpoint, they are, because they are explicitly and violently anti-Israel — for a group with such radical political goals, sometimes ideology just means having a shared enemy. Beneath their seemingly unshakeable veneer, however, it’s only becoming clearer that Handala’s words are those of a state in crisis, one which has been hampered by sanctions into near technological autarky and that is literally struggling to keep the lights on thanks to repeated sieges of its own critical infrastructures. Lest we forget, the “world’s first cyberweapon”, Stuxnet, was created as a joint US-Israeli venture for the express purpose of destroying Iran’s nuclear program by targeting its SCADA and PLC systems. When the US warns that Iran is capable of targeting those same systems, it is merely positioning Iran as an enemy that is capable of doing to us exactly what we are to them. Although its motivations are ultimately multilayered and complex, Handala/the Iranian state’s “goal” is likely not simple fear-mongering. It’s to cause embarrassment, eroding the public’s good faith assumptions of its leaders’ motivations in the Global East as their actions are brought to light. Given the group’s level of media coverage for its minor hacking feats, who’s to say that things aren’t going as planned? This article is published as part of the Foundry Expert Contributor Network.Want to join?
CISA Adds 8 Exploited Flaws to KEV, Sets April-May 2026 Federal Deadlines
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Monday added eight new vulnerabilities to its Known Exploited Vulnerabilities (KEV) catalog, including three flaws impacting Cisco Catalyst SD-WAN Manager, citing evidence of active exploitation. The list of vulnerabilities is as follows - CVE-2023-27351 (CVSS score: 8.2) - An improper authentication vulnerability in PaperCut
SGLang CVE-2026-5760 (CVSS 9.8) Enables RCE via Malicious GGUF Model Files
A critical security vulnerability has been disclosed in SGLang that, if successfully exploited, could result in remote code execution on susceptible systems. The vulnerability, tracked as CVE-2026-5760, carries a CVSS score of 9.8 out of 10.0. It has been described as a case of command injection leading to the execution of arbitrary code. SGLang is a high-performance, open-source serving
⚡ Weekly Recap: Vercel Hack, Push Fraud, QEMU Abused, New Android RATs Emerge & More
Monday’s recap shows the same pattern in different places. A third-party tool becomes a way in, then leads to internal access. A trusted download path is briefly swapped to deliver malware. Browser extensions act normally while pulling data and running code. Even update channels are used to push payloads. It’s not breaking systems—it’s bending trust. There’s also a shift in how attacks run.
Attackers abuse Microsoft Teams to impersonate the IT helpdesk in a new enterprise intrusion playbook
Attackers are increasingly exploiting enterprise collaboration platforms such as Microsoft Teams to gain initial access, impersonating IT helpdesk staff and persuading employees to grant remote control, according to new research from Microsoft. In a blog post, Microsoft described a “cross-tenant helpdesk impersonation” technique in which threat actors initiate conversations with employees via Teams’ external access feature. “Attackers use social engineering to convince users to grant access,” Microsoft said, noting that the approach allows adversaries to operate within trusted communication channels and bypass traditional phishing defenses. Unlike conventional phishing or exploit-driven attacks, the technique relies on what Microsoft characterizes as user-approved access. Victims are persuaded to initiate remote sessions, often using legitimate tools, effectively handing control to attackers without triggering typical malware-based detections, the blog post said. Shift to collaboration apps While the technique may appear new, analysts say it reflects an evolution rather than a reinvention of attack methods. “From my perspective, this is more an evolution of existing social engineering tactics than a fundamental shift,” said Prabhjyot Kaur, senior analyst at Everest Group. “The underlying objective hasn’t changed. Attackers are still exploiting user trust and urgency to gain initial access. What is changing is the channel.” As platforms such as Teams become central to workplace communication, attackers are following users into those environments. Unlike email, these platforms enable real-time engagement, making impersonation of IT or helpdesk staff more convincing. Kaur said collaboration platforms enable real-time interaction, making impersonation of IT or helpdesk staff more convincing than email-based phishing. “So rather than replacing phishing, this expands the attack surface and makes social engineering more operationally effective,” Kaur said. Offering a sharper view of the shift, Sanchit Vir Gogia, chief analyst at Greyhound Research, said the change is less about channel and more about how attacks unfold. “Phishing asked for attention. This model demands participation,” he said. “Attackers are inserting themselves into legitimate workflows and guiding users step by step through actions that grant access,” Gogia added, describing it as a move toward “guided execution” rather than simple deception. Microsoft’s findings follow earlier incidents in which attackers used Teams chats and calls to impersonate IT support and initiate remote access. Cross-tenant risk grows The attack chain uses Teams’ cross-tenant communication capability, which allows external users to initiate chats with employees, Microsoft wrote in the blog. “The cross-tenant risk is significant, and many organizations probably do underestimate it,” said Sunil Varkey, advisor at Beagle Security. “Collaboration tools were designed to reduce friction, but many organizations enabled that convenience before fully applying Zero Trust controls,” Varkey said. “The sustainable approach is to keep the business value of these platforms while treating every external interaction, support request, and access approval as something that must be verified, limited, and monitored.” He compared the risk to a physical security gap. Allowing anyone into a lobby should not mean they can walk employees to restricted areas and request access. Kaur added that many enterprises still treat collaboration platforms primarily as productivity tools rather than part of their attack surface. “Cross-tenant access is necessary for business, but it introduces a trust boundary that is often not well understood or tightly controlled,” she said. Gogia said the issue is rooted in how trust is applied in modern environments. “External actors can now initiate interactions inside environments that employees associate with internal coordination,” he said, adding that this creates a “false sense of safety.” Detection becomes harder Microsoft said attackers use legitimate administrative tools and remote access utilities after gaining entry, making activity harder to distinguish from normal operations. Because attackers use legitimate tools and approved workflows, “there’s very little that looks overtly malicious in isolation,” Kaur said. “These attacks blend into normal IT operations.” Microsoft also noted that attackers rely on native administrative tools and legitimate data transfer utilities to move laterally and exfiltrate data while appearing as routine activity. This shifts the focus toward behavioral detection. “Security teams should prioritize detecting sequences of activity,” Kaur said, pointing to patterns such as an unsolicited external Teams interaction followed by remote support activity and lateral movement. Gogia said this requires a shift in detection approach. “These attacks do not rely on exploits. They rely on sequence,” he said. “Each individual action appears legitimate. The compromise emerges only when those actions are connected.” Varkey added that defenders need to move beyond traditional indicators. “Because these attacks rely on legitimate tools and user-approved actions, security teams need to focus on context and behavior, not just malware,” he said. Tighter controls needed To reduce risk, experts say organizations need stronger governance over collaboration environments. “Collaboration platforms are often configured for convenience first, with easy external chat, calls, screen sharing, and remote assistance, without fully considering how those features can be abused together,” Varkey said. Kaur emphasized the need for integrated visibility. “The most effective defenses will come from integrating collaboration, identity, endpoint, and SOC visibility rather than treating them as separate layers,” she said. Recommended measures include tightening external access controls, restricting remote-support tools to approved workflows, enforcing conditional access and multi-factor authentication, and improving user awareness around how legitimate IT support interactions occur, Microsoft wrote.
Hackers exploit Vercel’s trust in AI integration
Frontend cloud platform Vercel, the creator of Next.js and Turbo.js, has warned about a data breach after a compromised third-party AI application abused OAuth to access its internal systems. A Vercel employee used the third party app, identified as Context.ai , which allowed the attackers to take over their Google Workspace account and access some environment variables that the company said were not marked as “sensitive.” “Environment variables marked as “sensitive” in Vercel are stored in a manner that prevents them from being read, and we currently do not have evidence that those values were accessed,” Vercel said in a security post. The incident compromised what the company described as a “limited subset” of customers whose Vercel credentials were exposed. These customers have now been reached out with requests to rotate their credentials, Vercel said. According to reports surfacing on the internet, a threat actor claiming to be the Shinyhunters began attempting to sell the stolen data, which allegedly include access key, source code, and private database, even before Vercel confirmed the breach publicly. Hacking the access Vercel’s disclosure confirmed that the initial access vector was Google Workspace OAuth tied to Context.ai. Once the application was compromised, attackers inherited the permissions granted to it, including access to Vercel employee’s account. It remains unclear whether Context.ai’s infrastructure was compromised, OAuth tokens were stolen, or a session/token leak within the AI workspace enabled attackers to abuse authenticated access into Vercel’s environments. Context.ai did not immediately respond to CSO’s request for comments. “We have engaged Context.ai directly to understand the full scope of the underlying compromise,” Vercel said in the post. “We assess the attacker as highly sophisticated based on their operational velocity and detailed understanding of Vercel’s systems. We are working with Mandiant, additional cybersecurity firms, industry peers, and law enforcement.” Vercel has urged its customers to review activity logs for suspicious behavior and to rotate environment variables, especially any unprotected secrets that may have been exposed. It also recommended enabling sensitive variable protections, checking recent deployments for anomalies, and strengthening safeguards by updating deployment protection settings and rotating related tokens where needed. Sensitive secrets, including API keys, tokens, database credentials, and signing keys, that were not marked as “sensitive” should be treated as potentially exposed and rotated as a priority, Vercel emphasized. For users in panic, Vercel has offered an shortcut. “If you have not been contacted, we do not have reason to believe that your Vercel credentials or personal data have been compromised at this time,” the post reassured. Allegedly breached by ShinyHunters According to screenshots circulating on the internet, a threat actor has already claimed the breach on the dark web and is attempting to sell the spoils. “Greetings All, Today I am selling Access Key/ Source Code/ Database from Vercel company,” the actor said in one of such posts. “Give me a quote if you’re interested. This could be the largest supply chain attack ever if done right.” The data was put up for $2 million on April, 19. The threat actor can be seen using a “BreachForums” domain in the screenshot, claiming (not explicitly) to be Shinyhunters themselves, one of the operators of the notorious hacksite. Other giveaways include a Telegram channel “@Shinyc0rpsss” and an email id “[email protected]” mentioned in the post. While recent incidents have hinted at ShinyHunters resurfacing after  takedowns and alleged arrests, it remains likely that this is an imposter leveraging the name to lend credibility, something that has precedent.
Anthropic MCP Design Vulnerability Enables RCE, Threatening AI Supply Chain
Cybersecurity researchers have discovered a critical "by design" weakness in the Model Context Protocol's (MCP) architecture that could pave the way for remote code execution and have a cascading effect on the artificial intelligence (AI) supply chain. "This flaw enables Arbitrary Command Execution (RCE) on any system running a vulnerable MCP implementation, granting attackers direct access to
CISOs reshape their roles as business risk strategists
Nitin Raina’s career history resembles that of many CISOs: He worked in IT infrastructure, operations, and services before moving into security and advancing through the ranks. He’s now global chief information security officer at technology consultancy Thoughtworks. But in a less common professional move Raina also picked up the role of global head of enterprise risk, a position he has held at Thoughtworks since 2020. He earned the job, he says, because of his ability and propensity to talk “about risk in totality.” After taking the position, Raina established the enterprise risk management function, which he now oversees. The function identifies and mitigates strategic, operational, and cybersecurity risks throughout the organization, and performs in-depth risk assessments and gap analyses to uncover vulnerabilities and inefficiencies within critical business processes, systems, and controls. Raina says heading enterprise risk is a natural fit for him as CISO, which is why he believes the two roles should be paired more frequently. “The risk conversation, as CISOs, we can lead that,” Raina says. “We have the ability and the forum in which we can raise it.” Most CISOs don’t hold a risk title, as Raina does, yet researchers, executive advisers, and other security leaders say CISOs are increasingly taking on more enterprise risk management tasks. It’s a logical expansion, these experts say. CISOs have been coached for years to identify how cyber risks pose business risks and to understand which risks represent the biggest risks to the enterprise, whether the impact of any of those exceed the organization’s tolerance for risks, and if so by how much. That CISO work is more critical than ever, they further assert. Nearly all business operations have become digital. That fact makes any cyber risk a material risk to the business, and it makes resiliency an operational imperative today. As such, the CISO should be a key player in assessing and managing business risk. “CISOs had once been focused on IT and cybersecurity risk. They’d ask, ‘What are the risks I have for platforms, applications, systems, the tech stack?’ It was a very flat plane,” says Paul Caron, global managed services lead and head of cybersecurity for the Americas at S-RM, a global corporate intelligence and cybersecurity consultancy. “But it has evolved in the past few years, and now CISOs are being pulled into new areas. They’re being asked, ‘What are the risks to the business?’” CISOs lead the way on risk In the 2026 CISO Report from data platform maker Splunk, 78% of CISOs reported joint accountability with other technical C-suite leaders (CIO, CTO, etc.) for security operational business risk, 56% have that joint accountability with CEOs, and 29% have joint accountability with other C-suite roles (CFO, chief legal officer, etc.). The report also found that 96% of CISOs are now responsible for AI governance and risk management. Meanwhile, the CyberRisk Alliance’s Q1 2026 CISO Top 10 report found that governance, risk, and compliance is the top priority for CISOs today. The report says this reflects GRC’s “role as the primary mechanism through which cybersecurity earns executive and board trust.” The report also notes that “organizations are under pressure to prove that risk oversight is continuous, defensible, and integrated into enterprise decision-making. CISOs are increasingly expected to unify regulatory obligations, enterprise risk tolerance, and security controls into a coherent operating model that supports real-time governance.” Evolving risks require a new CISO leadership profile The shift to CISO as a risk position, and not one limited to technical and cybersecurity alone, has been years in the making. But it has accelerated since the arrival of ChatGPT in late 2022, as organizations embraced first generative AI and more recently agentic AI. That’s because AI melds with the business process, whereas prior technologies only enabled business processes. That melding raises the stakes and makes cyber, digital, and business risk nearly synonymous. That evolution has pushed the CISO deeper into risk assessment and management, and it requires a different type of CISO than those of the past. “CISOs cannot walk around and make decisions based on fear or compliance. They must now be able to talk about risk in business terms. They need to understand that risk is a business conversation,” says Leon DuPree, lecturer at Eastern Michigan University’s School of Information Security and Applied Computing. Leading CISOs do this by quantifying both risk and the ROI of their options to address those risks, DuPree says, noting that many use the Factor Analysis of Information Risk (FAIR) model to understand and position cyber and operational risk in financial terms. “That’s the direction that CISOs are trying to go, so they can facilitate change and innovation working from ROIs for all the dollars being spent on security assets and risk mitigation,” he adds. S-RM’s Caron sees more CISOs taking this approach. For example, he says more security chiefs are being tasked with assessing and modeling risks associated with the AI uses within their organizations and reporting how those risks impact business processes — not just data integrity and IT systems. To perform such duties, CISOs must use more of their executive skills than their cyber acumen, Caron says. They must identify risks that come with the deployment of AI and other technologies, quantify those risks in business terms, offer mitigation strategies, quantify how each mitigation option reduces business risks, and help prioritize risk-related tasks based on expected returns and business objectives. “It takes more of a business leader’s lens than a very technical lens. So CISOs now have to be the ones responsible for steering the conversation into directions that show they’re a partner with the business to accelerate growth,” he explains. “The businesses of today are demanding more and more a business CISO.” Caron acknowledges that it’s a significant demand, one that requires CISOs to expand their knowledge base beyond technical and even compliance to business operations, enterprise strategy, and market conditions. “I think that’s where CISOs needs to start going, not necessarily where they are today,” he adds. “Many do still struggle with the mental shift it takes.” A question of appetite Steve Martano, an IANS Research faculty member and a partner in Artico Search’s cybersecurity practice, says the majority of CISOs rise through the technical and engineering ranks, so many still find enterprise risk assessment and management novel tasks. But, like Caron, he says it’s now part of the gig. “I think understanding how emerging tech impacts the organization’s risk profile is something they must do, and I think the conversation around enterprise risk is always something security practitioners should be striving for when they communicate,” he says. But Martano, like others, also says CISOs do not have — nor should they assume — ownership over establishing the organization’s risk appetite. “It’s not the CISOs job to revisit the risk posture itself. It’s not the CISO’s job to say, ‘We’re operating too loose,’” Martano says. Instead, CISOs must possess “a good understanding of what the organization thinks is inbounds and out-of-bounds” so they can “flag how technologies, processes, and tools could have an effect on the risk posture,” he says. “The CISO is the adviser.” Boards expect CISOs to be capable of identifying and assessing current and future risks as well as advising on whether to mitigate, transfer, insure against or accept those risks, he adds. That may be more challenging now than ever, with technology, AI, and enterprise use of them swiftly evolving. “The best CISOs think about risks that are around the corner. They have to have a pulse on where things are going,” Martano adds. “They don’t have to be visionary; but they do need to be proactive by engaging more outside their four walls, engaging with vendors, information-sharing with their peers, having a pulse on the macro level. The more they diversify what they’re hearing, the better, so they can bring nuggets of information to their boards and executive teams to discuss and how those affect their own organization’s risk culture.”
Fracturing Software Security With Frontier AI Models
Unit 42 finds frontier AI models enhance vulnerability discovery, acting as full-spectrum security researchers. They enable autonomous zero-day discovery and faster N-day patching. The post Fracturing Software Security With Frontier AI Models appeared first on Unit 42.
Copilot & Agentforce offen für Prompt-Injection-Tricks
KI-Agenten sind populär – und anfällig dafür, missbraucht zu werden.DC Studio / Shutterstock KI-Agenten fürs Enterprise können bekanntlich Arbeitsabläufe optimieren. Aber auch die Datenexfiltration – wie Sicherheitsforscher von Capsule Security herausgefunden haben. Sie haben sowohl in Microsoft Copilot Studio als auch Salesforce Agentforce Prompt-Injection-Schwachstellen entdeckt. Diese ermöglichen Angreifern in beiden Fällen schadhafte Befehle über scheinbar harmlose Prompts einzuschleusen – mit potenziell verheerenden Folgen. Copilot leakt Sharepoint-Daten Beim „ShareLeak“ getauften Problem auf Microsoft-Seite liegt der Knackpunkt darin, wie Copilot-Studio-Agenten SharePoint-Formulare verarbeiten. Der Angriff beginnt mit einem manipulierten Payload, der in ein Standard-Formularfeld (etwa „Kommentare“) eingefügt wird. Diese fließt später im Rahmen seines operationellen Kontexts in den KI-Agenten ein. Weil das KI-System Benutzer-Inputs mit System-Prompts verknüpft, überschreibt der „injizierte“ Payload die ursprünglichen Anweisungen des Agenten. Das KI-Modell behandelt damit die Anweisungen eines Angreifers als legitime System-Direktiven – der schadhafte Input wird ohne jegliche Widerstände vom Agenten ausgeführt. Sobald ein Agent auf diese Art und Weise kompromittiert wurde, ist es demnach auch möglich, auf verbundene Sharepoint-Listen zuzugreifen, sensible Kundendaten zu extrahieren und diese per E-Mail zu versenden. Wie die Forscher feststellten, wurden Daten selbst dann exfiltriert, wenn die Sicherheitsmechanismen von Microsoft verdächtiges Verhalten meldeten. „Die Hauptursache dafür ist, dass es keine zuverlässige Trennung zwischen vertrauenswürdigen Systemanweisungen und nicht vertrauenswürdigen Benutzerdaten gibt. In der bestehenden Konfiguration kann die KI das nicht voneinander unterscheiden“, so die Sicherheitsexperten. Microsoft hat inzwischen einen Patch veröffentlicht, der das Problem behoben hat. Und die Sicherheitslücke mit einem Schweregrad von 7,5 von 10 auf der CVSS-Skala bewertet. Seitens der Benutzer sind keine weiteren Maßnahmen erforderlich. Lead-Formulare kapern Agentforce Im Fall von Salesforce Agentforce konnten die Forscher von Capsule maliziöse Instruktionen in ein öffentlich zugängliches Lead-Formular einbetten, die im Anschluss über einen „Agent Flow“ mit E-Mail-Funktionen ausgeführt wurden. Weist ein interner Benutzer einen Agentforce-Agenten später an, diesen Lead zu überprüfen oder zu verarbeiten, führt dieser die Anweisungen aus und exfiltriert sensible Daten. „Das resultiert in einer nicht-autorisierten Datenoffenlegung und potenziell massenhafter Exfiltration von CRM-Daten“, schreiben die Forscher. Massenhaft deswegen, weil sich die Kompromittierung nicht auf einen einzelnen Datensatz beschränkt: Laut den Capsule-Experten kann ein gekaperter Agent mehrere Lead-Datensätze gleichzeitig abfragen und exfiltrieren, wodurch eine einzelne Formularübermittlung effektiv zur Datenbank-Extraktions-Pipeline werde. Den Forschern zufolge habe Salesforce das Prompt-Injection-Problem zwar anerkannt, den Exfiltrations-Vektor jedoch als „konfigurationsspezifisch“ eingestuft und auf optionale Human-in-the-Loop-Kontrollen verwiesen. Die Sicherheitsforscher von Capsule widersprechen dieser Darstellung und argumentieren, dass manuelle Genehmigungen den eigentlichen Zweck autonomer Agenten untergraben. Das eigentliche Problem, so die Forscher, seien unsichere Standardeinstellungen. Für die Automatisierung konzipierte Systeme sollten es demnach nicht zulassen, dass nicht-vertrauenswürdige Inputs die Ziele der Agenten neu definieren können. Was Unternehmen tun sollten Beide Sicherheitslücken laufen auf eine Grundvoraussetzung hinaus: Sämtliche externe Inputs sollten als nicht vertrauenswürdig behandelt werden. Und: Filter einzurichten, die Daten von Anweisungen trennen, ist zu empfehlen. Dies würde auch bedeuten, folgende Maßnahmen durchzusetzen: Input-Validierung, Least-Privilege-Zugriff, sowie strikte Kontrollmaßnahmen für Dinge wie ausgehende E-Mails. (fm)
Researchers Detect ZionSiphon Malware Targeting Israeli Water, Desalination OT Systems
Cybersecurity researchers have flagged a new malware called ZionSiphon that appears to be specifically designed to target Israeli water treatment and desalination systems. The malware has been codenamed ZionSiphon by Darktrace, highlighting its ability to set up persistence, tamper with local configuration files, and scan for operational technology (OT)-relevant services on the local subnet.
Last updated: 2026-04-23 12:41:18 | Next auto-update in: 15:00