DVIUS INTELLIGENCE

Real-Time Cyber Attack Monitoring

THREAT INTELLIGENCE FEED

[ LIVE THREAT DASHBOARD ]

20,125 ACTIVE THREATS | 3,265 CRITICAL | 3,920 RANSOMWARE | 12 SOURCES
DVIUS AI: Advanced Threat Intelligence and Machine Learning Defense
DVIUS AI represents a groundbreaking advancement in cybersecurity threat intelligence. Our proprietary machine learning algorithms analyze global threat data in real-time, identifying patterns and anomalies that traditional security systems often miss. The system processes billions of data points daily, leveraging deep neural networks to provide unprecedented visibility into evolving cyber threats. Recent deployments have demonstrated remarkable effectiveness with 99.7% accuracy in threat detection and a 68% reduction in false positives compared to conventional solutions. The autonomous response capabilities can contain threats within milliseconds, significantly reducing potential damage to enterprise systems. As cyber threats continue to evolve in sophistication, DVIUS AI's adaptive learning capabilities ensure continuous improvement in defensive strategies. The platform represents the future of intelligent, automated cybersecurity defense.
React2Shell Exploitation Delivers Crypto Miners and New Malware Across Multiple Sectors
React2Shell continues to see heavy exploitation, with threat actors leveraging the maximum-severity security flaw in React Server Components (RSC) to deliver cryptocurrency miners and an array of previously undocumented malware families, according to new findings from Huntress. These include a Linux backdoor called PeerBlight, a reverse proxy tunnel named CowTunnel, and a Go-based
.NET SOAPwn Flaw Opens Door for File Writes and Remote Code Execution via Rogue WSDL
New research has uncovered exploitation primitives in the .NET Framework that could be leveraged against enterprise-grade applications to achieve remote code execution. WatchTowr Labs, which has codenamed the "invalid cast vulnerability" SOAPwn, said the issue impacts Barracuda Service Center RMM, Ivanti Endpoint Manager (EPM), and Umbraco 8. But the number of affected vendors is likely to be
How can staff+ security engineers force-multiply their impact?
Staff+ engineers play a critical role in designing, scaling and influencing the security posture of an organization. Their key areas of expertise include developing security strategy and governance, incident response leadership, automation, compliance/risk management and cross-org collaboration to shape security culture. Together, these capabilities are essential to enhance application security and the effectiveness of their organizations.

However, in our experience, many staff+ security engineers face scaling challenges. Instead of leveraging their expertise to drive broad, cross-stack impact, they tend to concentrate on specific incidents or focus areas, which limits their ability to extend their influence and strategic reach. This scaling problem has consequences for both the organization and the engineer’s personal goals. Leadership also considers staff+ engineers trusted advisors who help them make high-judgment decisions. But when engineers get stuck on specific tactical incidents or solutions, leaders are left without their strategic insights. Conversely, staff+ engineers who are too deep in the weeds fail to proactively look out for their leaders’ problems. Leaders perceive these engineers as too busy and hesitate to increase their scope or loop them into broader discussions, which ultimately leads to missed opportunities for the staff+ security engineers.

There are plenty of practices that staff+ engineers can adopt to scale and force-multiply their impact across their organization. Remember, you, as a staff+ security engineer, are ultimately an enabler, not a bottleneck!

Practical ideas to help you scale

One of the most common ideas in people management, “scaling through others,” applies well to staff+ security engineers. It means amplifying your impact not by doing more work yourself, but by enabling many others to work more effectively and productively through your influence.
In other words, you’ll do best not by being a hero, but by creating “mini-yous” across the organization. When applied with discipline, scaling through others works well in practical settings. Here are some ideas to consider:

Create mechanisms that allow you to scale

Mechanisms enforce or reinforce a behavior automatically. They are not one-size-fits-all, but with some trial and error, we have observed that strong mechanisms consistently support desired behavior. For example, a policy-as-code framework integrated into CI/CD pipelines automatically enforces security and compliance policies, reducing manual checks and human error. While this is an example of a technical mechanism (which we discuss more in the next section), mechanisms can also be people-oriented, such as mentorship programs or mentorship trees.

Determine where to dive deeper and where to delegate

As experts in the area, staff+ engineers can dive in almost anywhere, from critical incidents to strategic initiatives. They may be drawn in by urgent team needs, their own curiosity or something else. But it is crucial for them to evaluate carefully where to commit and where to avoid costly commitments. Asking a set of targeted questions can provide valuable insight: “What is the potential impact on security posture or risk to the organization?”; “Is there an established process or tooling (‘paved path’) to address this?”; and “Is this a one-time incident or a recurring security challenge that requires a scalable, strategic solution?” Often, true learning comes from failure. If the risk is manageable, allow others to step up and learn from their own failures.

Create a trusted group

To scale through others, you will need a group to rely on. Some organizations solve this problem via job levels, where staff+ engineer roles are defined to scale through other roles like senior security engineers. In other cases, you might need to define your own selection criteria and training path.
Just creating this group is not enough; an action plan and thorough execution are critical. In practice, such working groups run brown-bag sessions, create mentorship and recognition programs and discuss/review solutions that help lift the organization’s security KPIs. Additionally, mentorship sessions and office hours from staff+ engineers help build working relationships that last.

Employ non-security engineers to the cause

Involving application engineers in the security cause is an often-overlooked “hack” that works well in industrial settings. This “shift-left” approach embeds security practices directly into the software development pipeline, enabling development teams to take ownership of security controls and assessments early in the lifecycle. Programs such as security champions or security reviewers empower application engineers to integrate standard security and compliance practices into their regular workflows, reducing bottlenecks and fostering a security-first mindset. Staff+ security engineers should look for opportunities to drive the creation of these programs, enable cross-functional collaboration and scale through application engineers to increase their impact.

Eliminate anti-patterns

Lastly, we recommend that staff+ security engineers inspect and eliminate anti-patterns in their (and peer) organizations. These anti-patterns work against scaling and make them and their organizations bottlenecks instead of enablers. One example we have commonly seen is security engineers acting as permanent gatekeepers. This “block by default” approach is expensive, demands significant time from staff+ engineers and slows down the business. Similarly, policies that allow no exceptions are a time drain for both security and application teams. We highly recommend that staff+ security engineers proactively identify such patterns and replace them with mechanisms.
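As a concrete illustration of the mechanisms discussed above, here is a minimal, hypothetical policy-as-code check of the kind that could run in a CI/CD pipeline. The resource format, rule names and function names are invented for illustration; real deployments typically use dedicated policy engines (OPA/Conftest, cloud-native policy services) rather than hand-rolled scripts.

```python
# Minimal policy-as-code sketch: evaluate declarative resource definitions
# against security rules and fail the pipeline when violations are found.
# The rules and resource schema here are illustrative assumptions.

def check_resource(resource: dict) -> list[str]:
    """Return a list of policy violations for one resource (empty = compliant)."""
    violations = []
    if resource.get("public_access", False):
        violations.append(f"{resource['name']}: public access is not allowed")
    if not resource.get("encryption_at_rest", False):
        violations.append(f"{resource['name']}: encryption at rest is required")
    return violations

def enforce(resources: list[dict]) -> int:
    """CI entry point: print every violation, return non-zero if any exist."""
    all_violations = [v for r in resources for v in check_resource(r)]
    for v in all_violations:
        print(f"POLICY VIOLATION: {v}")
    return 1 if all_violations else 0

# Demo: one compliant and one non-compliant resource.
demo = [
    {"name": "logs-bucket", "public_access": False, "encryption_at_rest": True},
    {"name": "data-bucket", "public_access": True, "encryption_at_rest": False},
]
exit_code = enforce(demo)  # non-zero, so a CI job wired to this would fail
```

Because the check runs automatically on every change, the staff+ engineer’s guidance is enforced without their personal involvement, which is exactly what makes it a scaling mechanism rather than a review bottleneck.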
Technical mechanisms to consider

To effectively scale their impact, staff+ security engineers should champion a comprehensive technical approach that integrates secure practices into every layer of the organization, its technology and its culture. This ultimately acts as a mechanism, or guardrails, for their organizations, ensuring their guidance is automatically enforced and freeing up their time for strategic influence. Key elements include:

Incorporate focused action areas in the organization-wide security strategy: While staff+ security engineers are responsible for developing a clear, actionable security strategy, we recommend it encompass policy-as-code enforcement, risk gates and continuous monitoring. Leverage the trusted group to assign ownership by appointing area leaders who drive accountability and progress within their domains. Review their findings and tune the strategy periodically. This spares staff+ engineers from having to inspect every aspect of a large organization’s security strategy.

Adopt reference architectures and secure-by-default reusable modules: We recommend that staff+ security engineers build and provide trusted, opinionated blueprints, golden images, baseline policies and reusable components that make secure design the path of least resistance for development teams. Building such “paved paths” enables seamless and secure development without developer whiplash. Finally, by using trusted groups to drive adoption, they can effectively influence teams’ technical direction.

Shift-left security practices: As discussed briefly above, integrating security early in the development lifecycle is the central theme of modern DevSecOps practice. Embedding automated controls, threat modeling and validation tools into pull requests, CI/CD pipelines and infrastructure-as-code (IaC) plans enables developers to catch and fix issues before deployment without workflow disruption.
Consequently, this allows staff+ engineers (and their organizations) to reduce the number of security bugs that reach production.

Leverage AI-driven scanning tools and automation cautiously: The rapid development of GenAI has unlocked significant capabilities in security tooling. AI tools are now available that strengthen security practices through adaptive learning, risk prioritization and context-aware detection. Staff+ security engineers should champion the adoption of these tools to enhance vulnerability detection and streamline workflows. Supplementing them with expert reviews helps mitigate false positives and assess the impact of security vulnerabilities effectively.

Guardrails over gates: We recommend that staff+ security engineers build checks that block only on high-confidence, high-impact security signals, while warning on or logging lower-risk issues to maintain velocity. Use compensating controls like monitoring, automated remediation and risk scoring to manage risk without blocking progress.

The overall guiding principle we recommend for all staff+ security engineers is to make the secure way the easiest and most intuitive path for all engineers; this helps security scale sustainably alongside business growth. We believe this guiding principle, along with the technical framework above, enables staff+ security engineers to force-multiply their impact by embedding robust security foundations, fostering a culture of shared ownership and automating enforcement. The result is a resilient, scalable and developer-friendly security posture.

Incident influence

So, how can staff+ security engineers force-multiply their impact during active security incidents? The most critical tool the engineer has in such a scenario is their mindset: “You’re the stabilizer, not the savior.” Take the role of an orchestrator: if you get too deep into the logs, other areas that need support will suffer.
Assign tactical work to individual contributors and focus on leading the incident, coordinating across roles and managing leadership communications. Next, it is critical to identify inflection points. You will be expected to make high-velocity, high-judgment decisions that decide the course of incident management. Determine thresholds beyond which upper-leadership involvement or additional support is essential, and use these inflection points to guide when to move from containment to recovery to retrospective. Once the situation is under control, switch to an influencer role and scale through others, in line with your standard engagement mechanisms.

Act as a bridge between leadership and teams

Lastly, note that you are a link between management/leadership and the engineers on the ground. Managers may not fully understand the details of execution or the reasons for delays in identifying and remediating vulnerabilities in the software. Teams will rely on you to identify and bridge process gaps or to represent them to leadership for decision-making. For example, in our case, our team was hesitating to adopt a powerful static analysis tool. While the team identified it as a critical need, it had high licensing costs, leading to multiple back-and-forth discussions. When our principal staff engineer learned about it, she promptly created a one-page document with the pros and cons and aligned leaders on funding it due to the high return on investment. She resolved a two-week team debate in a single afternoon.

Conversely, you are also leadership’s representative on the ground, shepherding the team along leadership’s direction. Consider influencing teams to build and review deep-visibility dashboards that accurately capture key security insights. This gives leaders a strong feedback loop and real-time visibility into the consequences of their decisions.
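To make the “guardrails over gates” idea from the technical mechanisms above concrete, here is a small, hypothetical triage function for scanner findings. The severity labels, confidence threshold and action names are illustrative assumptions, not a prescription from any specific tool.

```python
# "Guardrails over gates" sketch: block the pipeline only for findings that
# are both high-impact and high-confidence; warn or log everything else so
# velocity is preserved. Thresholds here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    severity: str      # "low" | "medium" | "high" | "critical"
    confidence: float  # 0.0 - 1.0, as reported by the scanner

def triage(finding: Finding,
           block_severities=("high", "critical"),
           min_confidence=0.9) -> str:
    """Return the pipeline action for a finding: 'block', 'warn' or 'log'."""
    high_impact = finding.severity in block_severities
    high_confidence = finding.confidence >= min_confidence
    if high_impact and high_confidence:
        return "block"   # hard gate: worth stopping the release for
    if high_impact or high_confidence:
        return "warn"    # guardrail: surface it without halting work
    return "log"         # low risk: record for trend analysis and risk scoring
```

A critical finding with shaky confidence (say 0.5) warns rather than blocks, leaving compensating controls such as monitoring and automated remediation to manage the residual risk, which is the trade-off the guardrails approach accepts in exchange for velocity.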
Final thoughts

The journey of a staff+ security engineer is about transitioning from individual contributions to being a force multiplier. This is especially important as AI and automation redefine scale; the leaders who design for empowerment will define the next era of cybersecurity engineering.

This article is published as part of the Foundry Expert Contributor Network.
Hundreds of Ivanti EPM systems exposed online as critical flaw patched
Ivanti has patched a critical vulnerability in Endpoint Manager that enables attackers to hijack administrator sessions without authentication and potentially control thousands of enterprise devices. The company released EPM version 2024 SU4 SR1 to address four vulnerabilities, including the critical flaw tracked as CVE-2025-10573, which carries a CVSS score of 9.6. Three additional high-severity flaws could also enable code execution but require user interaction, Ivanti said in its December security advisory on Tuesday. Ivanti said the vulnerabilities were reported through its responsible disclosure program, adding that it was not aware of any customer systems being exploited at the time of disclosure. EPM has been targeted before. In March, CISA added three EPM vulnerabilities to its Known Exploited Vulnerabilities catalog after confirming exploitation in the wild. The flaws had been patched in January after being reported privately to Ivanti. Given EPM’s history of being targeted by attackers and the severity of the flaw, security teams should treat this as a patch-immediately situation rather than a routine update. The December update also fixed CVE-2025-13659 and CVE-2025-13662, which allow attackers to execute arbitrary code when users connect to an untrusted core server or import untrusted configuration files. Another flaw enables unauthorized file writes on the server.

Unauthenticated attack vector

The most severe vulnerability is a stored cross-site scripting flaw discovered by Ryan Emmons, staff security researcher at Rapid7, who reported it to Ivanti in August. According to Rapid7’s technical disclosure, also published Tuesday, attackers can submit malicious device scan data to EPM’s incoming data API without authentication. The malicious data is then processed and embedded in the EPM web dashboard, where it executes when administrators view affected pages.
“An attacker with unauthenticated access to the primary EPM web service can join fake managed endpoints to the EPM server in order to poison the administrator web dashboard with malicious JavaScript,” Emmons wrote in the report. Once the malicious JavaScript executes, attackers gain control of the admin session with full privileges to remotely control endpoints and install software on devices. Nick Tausek, lead security automation architect at Swimlane, warned, “Exploitation of this flaw would grant threat actors access to many managed devices at once, allowing for the execution of malicious code, deployment of ransomware, or exfiltration of sensitive data.”

The patching challenge

Despite the severity of such threats, organizations frequently struggle to address critical vulnerabilities quickly: Tausek said Swimlane research found 68% of organizations leave critical flaws unpatched for over 24 hours and 55% don’t have a comprehensive system for prioritizing vulnerabilities. The delay is particularly risky for endpoint management systems, which run with elevated privileges and control thousands of devices. Successful exploitation could bypass security controls and allow attackers to push malware to managed endpoints, modify security configurations, or establish persistent backdoors across the enterprise. “The potential for a serious exploitation campaign should not be overlooked,” Tausek said.

Pattern of exploitation

That concern is not theoretical. EPM’s history makes rapid patching more urgent. CISA added three EPM vulnerabilities (CVE-2024-13159, CVE-2024-13160, and CVE-2024-13161) to its Known Exploited Vulnerabilities catalog in March after confirming active exploitation. The agency flagged another exploited EPM flaw (CVE-2024-29824) in October. The repeated targeting demonstrates EPM’s value to attackers seeking persistent network access and lateral movement capabilities.
Once attackers compromise endpoint management infrastructure, they can spread across the enterprise rapidly.

Deployment guidance

The patch is available through the Ivanti License System and applies to EPM versions 2024 SU4 and earlier. Organizations running the 2022 branch should note that it reaches end of life in October 2025 and will no longer receive security updates after that date, the Ivanti advisory added. Security teams should prioritize updating EPM instances to version 2024 SU4 SR1 immediately, particularly any installations accessible from untrusted networks. Organizations with internet-facing EPM instances face the highest risk and should patch within 24 hours. For organizations that can’t patch immediately, the advisory recommended ensuring EPM management interfaces aren’t exposed to the public internet and implementing strict network segmentation to isolate management servers from untrusted networks. Tausek also recommended training administrators to recognize social engineering attacks, since the critical XSS vulnerability requires viewing a poisoned dashboard page to trigger. “Since EPMs often run with high privileges, any misuse of it risks bypassing security controls and rapidly escalating the impact of a breach,” Tausek added.
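The vulnerability class behind this flaw, stored XSS from untrusted telemetry, is generally mitigated by treating machine-submitted data as untrusted and encoding it at render time. The sketch below illustrates that defensive pattern in Python; the function and field names are hypothetical and do not reflect Ivanti’s codebase or the exact EPM data flow.

```python
# General defense against stored XSS from untrusted telemetry: HTML-encode
# every attacker-controllable field when rendering it into a dashboard.
# Illustrative only; names and data flow are assumptions, not EPM internals.

from html import escape

def render_device_row(device_name: str, scan_summary: str) -> str:
    """Build a dashboard table row with all untrusted fields HTML-encoded."""
    return (
        "<tr>"
        f"<td>{escape(device_name)}</td>"
        f"<td>{escape(scan_summary)}</td>"
        "</tr>"
    )

# A hostile "device name" submitted to an unauthenticated intake API is
# rendered as inert text instead of executing in the admin's browser.
hostile = "<script>steal(document.cookie)</script>"
row = render_device_row(hostile, "scan ok")
```

The broader lesson from the Rapid7 disclosure is that unauthenticated intake APIs feeding privileged dashboards deserve the same output-encoding discipline as any user-facing form field.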
Behind the breaches: Case studies that reveal adversary motives and modus operandi
In today’s threat landscape, it’s no longer enough to focus solely on malware signatures and IP addresses. Defenders must understand how adversaries think, organize and operate, because attacker intent and methodology are now just as critical as technical artifacts. Recent developments have provided rare visibility into the internal processes of modern threat groups: how they coordinate, communicate, exploit vulnerabilities and adapt their tooling in real time. This kind of behind-the-scenes insight is becoming indispensable as cyber threats grow more sophisticated, more specialized and more tightly aligned with financial or strategic objectives. We’ve analyzed a series of recent real-world incidents to better understand evolving threat actor behavior. Let’s take a closer look at what these cases reveal.

The BlackBasta chat leak

BlackBasta is often viewed as a tightly run ransomware operation, but internal leaks tell a very different story. The BlackBasta chat leak exposes the group’s behind-the-scenes reality, revealing not a polished, corporate-style criminal enterprise but a fragmented ecosystem marked by hierarchy issues, operational stress, shifting loyalties and deep-seated mistrust among members. At the top of the structure sits Oleg (aka Tramp), acting as the de facto operations director. The chats depict him as the ultimate decision-maker on campaigns, revenue distribution and targeting rules, including strategic exclusions such as avoiding Russian financial institutions. His leadership, however, is portrayed as opaque and self-interested, with several members openly questioning whether their earnings and workloads reflect fair compensation. Bio functions as the operation’s central technical architect, managing everything from infrastructure stability to access orchestration. His background under the alias “Pumba” in the Conti collective reinforces the well-known pattern of talent migrating across ransomware-as-a-service ecosystems.
Despite his skill set, the chats show Bio repeatedly expressing paranoia about state surveillance, especially following his release from detention, underscoring the constant psychological pressure faced by operators. Lara handles administrative tasks under heavy workload and stress, reportedly receiving less compensation than others despite being central to operations. The presence of actors like Cortes, with ties to Qakbot, demonstrates how ransomware crews frequently outsource expertise, rely on external access brokers or pull in operators with malware-specific experience as needed. This kind of crossover, visible only when internal dialogues spill out, shows how interconnected the cybercriminal ecosystem truly is. The chats further reveal operational inefficiencies that contradict the polished image these groups try to project. Members complain about slow decision-making, unclear leadership directives and disorganized workflows. Disputes over profit sharing, workload assignment and campaign prioritization point toward a group struggling to maintain cohesion. Even discussions around infrastructure updates, task delegation and encryption deployments show signs of technical debt and inconsistent coordination. Ultimately, the BlackBasta chat leak dispels the myth of ransomware groups as disciplined, unified machines. Instead, it exposes a loose federation of operators bound together by profit but pulled apart by mistrust, emotional strain, resource imbalance and competing personal agendas. For defenders, these insights offer not only a rare psychological snapshot of threat actor behavior but also a reminder that even the most feared cybercriminal groups are vulnerable to the same organizational weaknesses that plague legitimate enterprises.

The dual life of EncryptHub

What if the same threat actor breaching networks turned around and got a thank-you note for reporting the flaws they once exploited?
In a curious twist, Microsoft credited “EncryptHub”, a persona long tied to malware campaigns, credential theft and access brokering, for responsibly disclosing two Windows vulnerabilities in March 2025. Better known by aliases like SkorikARI and LARVA-208, this actor demonstrates a striking contradiction: simultaneously engaging in cybercrime while positioning themselves as a security researcher. When adversaries start submitting bug reports, the boundary between black-hat activity and legitimate vulnerability disclosure becomes increasingly blurred. Both vulnerabilities patched in Microsoft’s March Patch Tuesday were attributed to an individual with a documented history of malicious operations, including distributing malware through spoofed WinRAR websites and compromising hundreds of high-value targets across Europe and Asia. Unlike hierarchical ransomware groups, EncryptHub functions as a solo operator, shifting fluidly between freelance development, ad-hoc bug bounty submissions and illicit intrusion campaigns. Reports also indicate the use of ChatGPT to automate code generation, reconnaissance scripting and communication, reducing workload while enabling a faster operational tempo. This case highlights a growing trend in the threat landscape: actors who no longer fit into fixed categories. Instead of being exclusively criminal or exclusively “researcher,” many now oscillate between both based on financial incentives, operational pressure and perceived risk. The acknowledgment from Microsoft underscores the uncomfortable reality that modern threat actors are increasingly hybrid: strategic, opportunistic and adaptive. Understanding this duality is essential for evaluating their psychology, long-term intent and the evolving gray zone where legitimate security research and cybercrime increasingly intersect.

BlackLock’s open recruitment tactics

What happens when ransomware operators start posting job ads?
BlackLock’s recent recruitment campaigns reveal an increasingly brazen and industrialized cybercrime ecosystem, one where threat actors no longer rely solely on stealth but openly solicit personnel to scale their operations. The group has been aggressively searching for “traffers,” a role dedicated to funneling compromised traffic and delivering ready-to-exploit victims. These recruitment efforts, found across Russian-language underground forums such as RAMP as well as gated Telegram channels, highlight a maturing supply-chain model in ransomware operations. This traffer-driven workflow is designed to offload the riskiest phase of the attack chain, initial access, to external contractors. By outsourcing victim acquisition, BlackLock minimizes its operational exposure while ensuring a consistent inflow of compromised endpoints, credentials and exploitable network footholds. The model mirrors legitimate gig-economy structures but operates with criminal specialization: traffers focus exclusively on harvesting access through phishing, malware loaders or traffic distribution systems, while the core BlackLock operators handle encryption, negotiation mechanics and monetization. This level of open recruitment signals growing confidence within the ransomware underground. It further reflects the shift toward modular cybercrime-as-a-service ecosystems, where roles are distributed, attack components are interchangeable and entry barriers for aspiring threat actors continue to fall. Understanding this recruitment strategy is crucial, as the traffer economy significantly accelerates ransomware proliferation and underscores how deeply commoditized initial access has become.

Understanding, foresight, anticipation

Through this analysis, we’ve explored not just isolated incidents, but the broader behavioral patterns, operational workflows and strategic decision-making that define modern threat actors.
By understanding how these adversaries adapt, coordinate and exploit emerging opportunities, we gain the foresight needed to anticipate their next moves and continuously refine our defense strategies. As threat actor behaviors evolve, we’ll continue to publish deeper insights and actionable intelligence to help the cybersecurity community stay informed, resilient and one step ahead.

This article is published as part of the Foundry Expert Contributor Network.
Three PCIe Encryption Weaknesses Expose PCIe 5.0+ Systems to Faulty Data Handling
Three security vulnerabilities have been disclosed in the Peripheral Component Interconnect Express (PCIe) Integrity and Data Encryption (IDE) protocol specification that could expose systems to serious risks from a local attacker. The flaws impact PCIe Base Specification Revision 5.0 and onwards in the protocol mechanism introduced by the IDE Engineering Change Notice (ECN), according to the PCI Special
Quantum meets AI: The next cybersecurity battleground
In recent years, artificial intelligence (AI) has been spreading its tentacles across the global technological landscape, as evidenced by the increase in autonomous and automated technologies and their deployment across industries and sectors. While the world is still absorbing the global impact of AI, quantum computing is gradually emerging. In quantum computing, the principles of quantum mechanics are used to perform calculations, enabling the solution of complex problems faster than classical computers can manage. Some have described the recent AI boom and soon-to-mature quantum computing as the ‘mind’ and the ‘muscle’ respectively. If the analogy holds, one can only imagine the resulting effect when the ‘mind’ meets the ‘muscle’, and that is where we are heading. The collision of these two technologies promises to be the next major technological battleground, capable of shaping computing, cybersecurity and even geopolitical power structures. Together, these two forces will redefine not only how we compute in the 21st century, but also how power, privacy and innovation are distributed. While AI algorithms recognize patterns and learn from the data fed into them, quantum computers can explore multiple computational paths simultaneously, unlocking a computing revolution.

Quantum logic reshaping the internet

At its core, quantum computing is not just about faster computers; it is an entirely different model of processing. Instead of using bits (0s and 1s) as classical computers do, quantum computers use qubits, which can exist in multiple states simultaneously through the principles of superposition and entanglement. By implication, a well-designed quantum system can solve in microseconds problems that classical computers could take years to solve.
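The superposition idea can be made tangible with a tiny classical simulation of a single qubit: two complex amplitudes whose squared magnitudes give the measurement probabilities (the Born rule). This toy illustrates only the arithmetic; it has none of the speedups of real quantum hardware, and the function names are my own.

```python
# Toy single-qubit simulation: a state is a pair of amplitudes (a0, a1).
# Applying a Hadamard gate to |0> puts the qubit into an equal superposition,
# so a measurement yields 0 or 1 with probability 0.5 each.

import math

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state (a0, a1)."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

def probabilities(state):
    """Born rule: measurement probabilities are squared amplitude magnitudes."""
    a0, a1 = state
    return (abs(a0) ** 2, abs(a1) ** 2)

ket0 = (1.0, 0.0)            # qubit prepared in |0>
superposed = hadamard(ket0)  # amplitudes become (1/sqrt(2), 1/sqrt(2))
p0, p1 = probabilities(superposed)
# p0 and p1 are each ~0.5: the qubit is "in both states" until measured.
```

Note that simulating n qubits classically requires tracking 2^n amplitudes, which is precisely why real quantum hardware, rather than simulation, is needed for the large problems the article describes.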
This could enable ultra-secure communication through what is known as quantum key distribution (QKD). Consequently, data interception would become nearly impossible, which could reshape global connectivity, providing faster and more secure digital infrastructures.

When AI meets quantum power

AI systems depend heavily on the data fed into their algorithms: the more data, the better the output. Most AI systems are constrained by hardware limitations, and even the largest, such as ChatGPT and DeepMind’s AlphaFold, confront these challenges. Quantum computing promises to ease these limitations, because quantum machine learning (QML) can be leveraged for tasks such as pattern recognition, optimisation and simulation. With QML, the need to train on massive data in real time in vast data centres may diminish. In practical terms, training runs that today take minutes over billions of AI model parameters could complete in microseconds, and simulating global climate systems or financial markets in real time would become feasible.

The darker side of quantum-AI synergy

As promising as this collision is, a dark undercurrent could emerge: the very technologies that promise to transform how we work could also be weaponised by state actors or cybercriminals. Malicious attackers could combine AI and quantum to mount quantum-enhanced cyber threats. Researchers also warn that cybercriminals may eventually be able to break modern encryption schemes such as elliptic curve cryptography (ECC), the advanced encryption standard (AES) and Rivest-Shamir-Adleman (RSA), among others.
For instance, RSA and ECC are encryption methods widely used by financial institutions to protect online transactions. If these are compromised, the confidentiality of encrypted data is exposed, allowing unauthorized access to that information. The day this happens has already been dubbed 'Q-Day': the moment quantum computers become powerful enough to break today's encryption standards. Other security breaches, such as password cracking, forging digital certificates or even deepfaking (impersonating AI systems), could also be enabled by cybercriminals with the advent of quantum and AI.

Preparing for Q-Day: The cybersecurity paradox

As most systems now rely on cryptographic keys to keep their data confidential, Q-Day would render many of these efforts useless. Organizations and government institutions alike are bracing for Q-Day, since the internet, government databases, corporate databases and even financial systems could be cracked. The UK's National Cyber Security Centre (NCSC) has outlined a phased approach with a 2035 target for complete migration across all systems, while the US is mandating a similar transition for its National Security Systems by 2030. These efforts reflect a proactive defence approach focused on developing quantum-resistant encryption, securing cryptographic keys and adopting adaptive cybersecurity policies to withstand the coming quantum age.

Trust: The new currency of innovation

One challenge of quantum systems is that they operate on probabilities, not certainties. Similarly, the output of AI systems can be marred by poor data quality, data bias, lack of explainability and transparency, adversarial threats, and ethical and governance concerns.
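To see why Q-Day threatens RSA specifically, consider a toy example with deliberately tiny primes: the private key falls out immediately once the modulus is factored, and factoring is exactly the step Shor's algorithm would accelerate on a large quantum computer. This is a sketch for intuition, not usable cryptography.

```python
# Toy RSA with absurdly small primes, purely to show that knowing the
# factorization of n immediately yields the private key. Shor's algorithm
# on a sufficiently large quantum computer would recover p and q even for
# real-world key sizes, which is the essence of the Q-Day threat.
p, q = 61, 53
n = p * q                  # public modulus (an attacker sees only n and e)
phi = (p - 1) * (q - 1)    # computable only with the factorization of n
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent, derived from p and q

msg = 42
cipher = pow(msg, e, n)    # encrypt with the public key
plain = pow(cipher, d, n)  # decrypt with the private key
assert plain == msg
```

Everything after the first line is mechanical once `p` and `q` are known; RSA's security rests entirely on classical computers being unable to recover them from `n` at realistic key sizes.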
The real battle for the developers of these innovations should revolve not only around speed, efficiency or building the most potent combination of quantum and AI systems, but around trust. Trust is essential, because the ultimate expectation of any system is that it generates the expected outcome. If AI represents intelligence and quantum represents uncertainty, the question becomes: how do we trust outcomes from systems we cannot fully explain? Trust will therefore need to be built through cybersecurity frameworks and regulations that enhance security, transparency and governance. This would help address post-quantum cryptography, AI auditing, explainability and ethical oversight, which will form the foundation of resilient digital ecosystems.

The road ahead

Although AI and quantum computing will not replace human intelligence, their fingerprints will be everywhere in the not-too-distant future. While AI is already transforming industries today, AI and quantum together promise innovations beyond what humans can imagine in sectors such as energy, healthcare, finance and national defence. The real question, however, is whether society can adapt to the pace at which these technologies are moving before they control us.

Looking beyond innovation

It is now clear that the convergence of artificial intelligence and quantum computing brings not only technological breakthroughs but also cybersecurity challenges. Together, they will make computation, problem-solving and data analysis more effective and efficient at unimaginable scales. Despite these capabilities, the risk that they will be abused in ways that undermine the very foundations of digital trust and privacy that modern societies depend upon is very high.
Therefore, as quantum computing matures and Q-Day approaches, the urgency of preparing for a post-quantum world becomes increasingly clear. Organizations, governments and cybersecurity experts should start looking beyond the innovation and technological advancement these technologies deliver and focus on resilience. This will involve massive investment in ethical AI governance, regulatory frameworks, and building post-quantum cryptographic standards into existing systems to maintain security and public confidence. The race should not only be about developing advanced technologies, but also about creating secure technologies that are not easily compromised by cybercriminals. This article is published as part of the Foundry Expert Contributor Network.
AI Browsers Endanger Enterprises
Experts warn against the use of AI browsers in enterprises. Digineer Station – shutterstock.com

Gartner analysts Dennis Xu, Evgeny Mirolyubov and John Watts urgently recommend that enterprises block all AI browsers for the foreseeable future because of the cybersecurity risks. They base their recommendation on risks that have already been identified "and other potential risks that have yet to be discovered, since this is a very young technology."
The warning comes at the right time, as AI browsers are already in widespread use: 27.7 percent of organizations have at least one user with Atlas installed, and in some companies the browser is actively used by up to ten percent of employees, according to a study by security vendor Cyberhaven. The highest adoption rates were found in the technology sector (67 percent), the pharmaceutical industry (50 percent) and the financial sector (40 percent), all sectors with elevated security requirements. ChatGPT Atlas, which launched on October 21, was downloaded by enterprises 62 times more often than Perplexity Comet, released on July 9, according to Cyberhaven. The launch of Atlas also rekindled broader interest in AI browsers, with Comet downloads rising sixfold in the same week. Immediately after the launch of ChatGPT Atlas, however, concerns were raised about the dangers posed by AI browsers. Experts pointed to vulnerabilities enabling prompt injection and to data-security problems.

Sensitive data at risk

One reason for this concern: enterprises lose control over their data when these browsers send active web content, browsing history and the contents of open tabs to the cloud for analysis. Perplexity's documentation, for example, warns that "Comet may process some local data using Perplexity's servers in order to fulfill requests. This means Comet reads the context of the requested page (for example, text and email) in order to carry out the requested task." Gartner analyst Mirolyubov explains: "The real problem is that the loss of sensitive data to AI services can be irreversible and untraceable. Enterprises may never be able to recover lost data." It is not just a question of where the browsers send your data for processing, but also of what they do with it. "Faulty transactions raise questions of accountability in the event of costly errors," says Mirolyubov.

Conventional controls are not enough

AI browsers can autonomously navigate websites, fill out forms and complete transactions while authenticated to web resources. As the Gartner analysts explain in their post, this exposes AI browsers to new cybersecurity risks, "such as fraudulent actions triggered by indirect prompt injection." These include the loss and misuse of credentials when the AI browser is tricked into autonomously navigating to a phishing site. "Conventional control mechanisms are inadequate for the new risks created by AI browsers, and solutions are still in their infancy," warns Mirolyubov. "There is a large gap in the inspection of multimodal communication with browsers, including voice commands to AI browsers." Shortly after the launch of ChatGPT Atlas, OpenAI CISO Dane Stuckey conceded in a post on X: "Prompt injection remains an unsolved security problem. Attackers will spend significant time and resources finding ways to use ChatGPT agents for attacks."

Discovered vulnerabilities highlight immaturity

Beyond the theoretical risks, concrete security flaws have surfaced in both major AI browsers. Days after the launch of ChatGPT Atlas, researchers discovered that it stores OAuth tokens unencrypted with overly permissive file settings on macOS, potentially allowing unauthorized access to user accounts. The flaw was documented on October 27 by the security research group Teamwin. As of October 31, when Gartner completed its research, OpenAI had not yet released a patch. Separately, in August the cybersecurity firm LayerX Security reported the discovery of a vulnerability in Comet dubbed "CometJacking" that could potentially forward user data to attacker-controlled servers.

A long road to maturity

The discovered vulnerabilities underscore broader concerns about the maturity of AI browser technology. "Security and privacy must become central design principles rather than afterthoughts," demands Mirolyubov. Vendors of AI browsers must build in enterprise-grade cybersecurity controls from the start and offer more transparency about data flows and agentic decisions. The Gartner analyst expects the development of new solutions to control AI use to take "years rather than months." "It is unlikely that all risks can be eliminated; erroneous actions by AI agents will remain a problem. Enterprises with low risk tolerance may need to block AI browsers for the longer term." Gartner advises enterprises with higher risk tolerance that want to experiment to limit pilots to small groups and to focus on low-risk use cases that are easy to verify and to roll back. "Users must always closely observe how the AI browser navigates autonomously when interacting with web resources." The analyst firm also recommends that, for now, enterprises block the installation of AI browsers using existing network and endpoint security controls and review their AI policies to ensure that broad use of AI browsers is prohibited.
"Currently, most cybersecurity teams choose to block AI browsers and delay adoption until the risks are better understood and the controls are more mature," says Mirolyubov. (jm)
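The unencrypted-token finding described above can be checked for generically on any POSIX system. This is a minimal illustrative sketch; the policy and the use of a throwaway temp file are assumptions for demonstration, not OpenAI's actual file layout.

```python
import os
import stat
import tempfile

# Generic check in the spirit of the Atlas finding: flag secret files
# (e.g. OAuth token stores) that are readable or writable by group/others.
# The threshold here (any group/other access) is an illustrative policy.
def overly_permissive(path: str) -> bool:
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IRGRP | stat.S_IWGRP | stat.S_IROTH | stat.S_IWOTH))

# Demo on a throwaway file: 0o644 is flagged, 0o600 is not.
with tempfile.NamedTemporaryFile(delete=False) as f:
    token_file = f.name
os.chmod(token_file, 0o644)
print(overly_permissive(token_file))  # True
os.chmod(token_file, 0o600)
print(overly_permissive(token_file))  # False
os.unlink(token_file)
```

Secret material such as OAuth tokens is conventionally stored with mode 0o600 (owner read/write only), which is exactly what a check like this enforces.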
Webinar: How Attackers Exploit Cloud Misconfigurations Across AWS, AI Models, and Kubernetes
Cloud security is changing. Attackers are no longer just breaking down the door; they are finding unlocked windows in your configurations, your identities, and your code. Standard security tools often miss these threats because they look like normal activity. To stop them, you need to see exactly how these attacks happen in the real world. Next week, the Cortex Cloud team at Palo Alto Networks
Warning: WinRAR Vulnerability CVE-2025-6218 Under Active Attack by Multiple Threat Groups
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Tuesday added a security flaw impacting the WinRAR file archiver and compression utility to its Known Exploited Vulnerabilities (KEV) catalog, citing evidence of active exploitation. The vulnerability, tracked as CVE-2025-6218 (CVSS score: 7.8), is a path traversal bug that could enable code execution. However, for exploitation
01flip: Multi-Platform Ransomware Written in Rust
01flip is a new ransomware family fully written in Rust. Activity linked to 01flip points to alleged dark web data leaks. The post 01flip: Multi-Platform Ransomware Written in Rust appeared first on Unit 42.
Microsoft Issues Security Fixes for 56 Flaws, Including Active Exploit and Two Zero-Days
Microsoft closed out 2025 with patches for 56 security flaws in various products across the Windows platform, including one vulnerability that has been actively exploited in the wild. Of the 56 flaws, three are rated Critical, and 53 are rated Important in severity. Two other defects are listed as publicly known at the time of the release. These include 29 privilege escalation, 18 remote code
Polymorphic AI malware exists — but it’s not what you think
We are either at the dawn of AI-driven malware that rewrites itself on the fly, or we are seeing vendors and threat actors exaggerate its capabilities. Recent Google and MIT Sloan reports reignited claims of autonomous attacks and polymorphic AI malware capable of evading defenders at machine speed. Headlines spread rapidly across security feeds, trade publications, and underground forums as vendors promoted AI-enhanced defenses. Beneath the noise, the reality is far less dramatic. Yes, attackers are experimenting with LLMs. Yes, AI can aid malware development or produce superficial polymorphism. And yes, CISOs should pay attention. But the narrative that AI automatically produces sophisticated malware or fundamentally breaks defenses is misleading. The gap between AI's theoretical potential and its practical utility remains large. For security leaders, the key is understanding realistic threats today, exaggerated vendor claims, and the near-future risks that deserve planning.

What even is polymorphic malware?

Polymorphic malware refers to malicious software that changes its code structure automatically while keeping the same core functionality. Its purpose is to evade signature-based detection by ensuring no two samples are identical at the binary level. The concept is by no means new. Before AI, attackers used encryption, packing, junk code insertion, instruction reordering, and mutation engines to generate millions of variants from a single malware family. Modern endpoint platforms rely more on behavioral analysis than static signatures. In practice, most so-called AI-driven polymorphism amounts to swapping a deterministic mutation engine for a probabilistic one powered by a large language model. In theory, this could introduce more variability. Realistically, though, it offers no clear advantage over existing techniques.
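The point about byte-level variation can be demonstrated harmlessly: two functionally identical snippets that differ only in junk padding produce different cryptographic hashes, which is all that signature evasion amounts to at the static level. This is a benign illustration, not malware.

```python
import hashlib

# Two functionally identical "programs" that differ only in junk bytes,
# illustrating why byte-level signatures (hashes) miss trivial variants
# even though the observable behavior is unchanged.
variant_a = b"print('hello')\n"
variant_b = b"print('hello')\n# junk-padding-1234\n"

hash_a = hashlib.sha256(variant_a).hexdigest()
hash_b = hashlib.sha256(variant_b).hexdigest()
print(hash_a == hash_b)  # False: the static signatures differ...

exec(compile(variant_a, "<a>", "exec"))  # ...but both print 'hello':
exec(compile(variant_b, "<b>", "exec"))  # behavior is identical
```

This is why modern endpoint platforms key on behavior (what the code does at runtime) rather than on the hash of the bytes, as the article goes on to argue.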
Marcus Hutchins, malware analyst and threat intelligence researcher, calls AI polymorphic malware "a really fun novelty research project," but not something that offers attackers a decisive advantage. He notes that non-AI techniques are predictable, cheap, and reliable, whereas AI-based approaches require local models or third-party API access and can introduce operational risk. Hutchins also pointed to examples like Google's "Thinking Robot" malware snippet, which queried the Gemini AI engine to generate code to evade antivirus. In reality, the snippet merely prompted the AI to produce a small code fragment with no defined function or guarantee of working in an actual malware chain. "It doesn't specify what the code block should do, or how it's going to evade an antivirus. It's just working under the assumption that Gemini just instinctively knows how to evade antiviruses (it doesn't). There's also no entropy to ensure the 'self-modifying' code differs from previous versions, or any guardrails to ensure it actually works. The function was also commented out and not even in use," Hutchins wrote in a post since deleted from LinkedIn. As the researcher observes, evasion alone is strategically meaningless unless it can reliably support a functioning malicious capability. Mature threat actors value reliability over novelty, and traditional polymorphism already meets that need.

What real advances is AI providing for attackers?

AI's true impact today isn't autonomous malware, but speed, scale, and accessibility in generating malicious payloads. Think of large language models serving as development assistants: debugging code, translating samples between languages, rewriting and optimizing scripts, and generating boilerplate loaders or stagers. This lowers technical barriers for less experienced actors and shortens iteration cycles for skilled ones. Social engineering has also improved. Phishing campaigns are cleaner, more convincing, and highly scalable.
AI rapidly generates region-specific lures, industry-appropriate pretexts, and polished messages, removing the grammatical red flags that defenders once relied on. Business email compromise attacks, which already depend on deception rather than technical sophistication, particularly benefit from this shift. Generative AI tools can produce superficial variations in malware code by renaming variables or slightly rearranging structures. This occasionally bypasses basic static scanning, but rarely defeats modern behavioral detection, and often introduces instability that is unacceptable for well-resourced criminal operations. For established threat actor groups that require uptime and dependable performance, this unpredictability is a disadvantage. The net effect isn't improved sophistication, but increased accessibility: more actors, even inexperienced ones, can now produce "good enough" malware. Earlier this year, a crude ransomware strain appeared in the Visual Studio marketplace as a test extension. John Tuckner of Secure Annex dubbed it "AI slop" ransomware: poorly written, unstable, and operationally unsophisticated. The sample highlighted how easily AI-assisted code can be bundled and distributed, not its ingenuity. "Ransomware has appeared in the VS Marketplace and makes me worry," Tuckner posted on X. "Clearly created through AI, it makes many mistakes like including decryption tools in extension. If this makes it into the marketplace through [sic], what impact would anything more sophisticated cause?"

Inflated AI claims draw industry pushback

The gap between marketing-driven AI narratives and practitioner skepticism is clear. A recent Anthropic report claimed a "highly sophisticated AI-led espionage campaign" targeting technology companies and government agencies. While some viewed this as proof that generative AI is embedded in nation-state cyber operations, experts were skeptical.
Veteran security researcher Kevin Beaumont criticized the report for lacking operational substance and providing no new indicators of compromise. BBC cyber correspondent Joe Tidy noted that the activity likely reflected familiar campaigns, not a new AI-driven threat. Another researcher, Daniel Card, emphasized that AI accelerates workflows but does not think, reason, or innovate autonomously. Across these discussions, one pattern remains consistent: AI hype collapses under technical scrutiny.

Why AI polymorphic malware hasn't taken over

If AI can accelerate development and generate endless variations of code, why has genuinely effective AI polymorphic malware not become commonplace? The reasons are practical rather than philosophical.

Traditional polymorphism works well: Commodity packers and crypters generate huge variant volumes cheaply and predictably. Operators see little benefit in switching to probabilistic AI generation that may break functionality.

Behavioral detection reduces benefits: Even if binaries differ, malware must still perform malicious actions (e.g., C2 communication, privilege escalation, credential theft, and lateral movement) that produce telemetry independent of code structure. Modern EDR, NDR, and XDR platforms detect this behavior reliably.

AI reliability issues: Large language models hallucinate, misuse libraries, or implement cryptography incorrectly. Code may appear plausible but fail under real-world conditions. As stated earlier, for criminal groups, instability is a serious operational risk.

Infrastructure exposure: Local models can leave forensic traces, and third-party APIs risk abuse detection and logging. These risks further deter disciplined threat actors.

Most successful adversaries still use AI for support tasks like research, phishing, translation, and automation, but do not fully trust it to generate the core payloads for their offensive operations.
What CISOs and defenders should watch out for

The real danger isn't underestimating AI but misunderstanding its risk. Autonomous self-rewriting malware isn't the immediate threat. Instead, attackers operate faster and at greater scale:

Automation and propagation. Recurrent malware campaigns like Shai-Hulud illustrate how attackers can use automation to dramatically increase efficiency, blast radius and the extent of disruption, without introducing novel technical logic. (This recurring campaign used automation, not necessarily AI.) In later iterations, automated propagation spread the malware rapidly across environments and downstream dependencies, even though the payloads remained identical. This meant defenders could still rely on stable indicators such as hashes, static exfiltration URLs, and YARA rules, but they had far less time to react before impact cascaded across registries, build systems, and developer environments. The risk shift was not smarter malware, but faster, wider execution at machine speed.

Rapid variant iterations. Building on the previous point, AI can shorten the time between concept and deployment. Malware families can cycle during a single incident, increasing the value of behavioral detection, memory analysis, and retroactive hunting.

Social engineering at scale. AI-generated phishing, pretexting, and tailored messages improve quality and reach. Identity infrastructure (credentials, MFA, access workflows) remains a key attack surface. Defenders should focus on email security, user behavior analytics, and authentication resilience.

Volume and noise. More actors can produce "good enough" malware, raising the number of low-quality but operationally usable threats. Automation and prioritization in SOC operations are becoming even more essential to prevent response teams from being overwhelmed with noise and burnout.

Vendor skepticism. Marketing claims of AI-specific protection don't guarantee superior detection.
CISOs should demand transparent testing, real-world datasets, validated false-positive rates, and proof that protections promised by “novel” products extend beyond lab conditions. AI is reshaping cybercrime, but not in the cinematic way some vendors suggest. Its impact lies in speed, scale, and accessibility rather than self-modifying malware that breaks existing defenses. Mature threat actors still rely on proven techniques. Polymorphism isn’t new, behavioral detection remains effective, and identity remains the primary entry point for attackers. Today’s “AI malware” is better understood as AI-assisted development rather than autonomous innovation. For CISOs, the key takeaway is a compression of time and effort for attackers. The advantage shifts to those who can automate, iterate faster, and maintain visibility and control. Preparing for this reality means doubling down on behavioral monitoring, identity security, and response automation. Right now, speculative self-aware malware is less of a risk than the real-world efficiency gains AI provides to attackers: faster campaign tempo, greater scale, and a lower barrier to entry for capable abuse. The hype is louder, but the operational impact of that acceleration is where leadership judgment now matters most.
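The earlier point about stable indicators can be sketched simply: hash-based IOC matching works against automation-driven campaigns with identical payloads precisely because identical bytes hash identically. The indicator value below is invented for illustration; real IOC sets come from threat intelligence feeds.

```python
import hashlib

# Minimal sketch of hash-based IOC matching. The "known-bad" entry is
# derived from a made-up byte string purely for demonstration; in practice
# the set would be populated from published indicators of compromise.
KNOWN_BAD = {
    hashlib.sha256(b"malicious payload bytes").hexdigest(),
}

def matches_ioc(blob: bytes) -> bool:
    """Return True if the blob's SHA-256 appears in the known-bad set."""
    return hashlib.sha256(blob).hexdigest() in KNOWN_BAD

print(matches_ioc(b"malicious payload bytes"))  # True
print(matches_ioc(b"benign bytes"))             # False
```

The flip side, as the article notes, is that even one byte of variation defeats this check, which is why behavioral monitoring has to carry the load against variant-heavy campaigns.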
Key cybersecurity takeaways from the 2026 NDAA
On Dec. 7, the House and Senate Homeland Security Committees released their compromise version of the 2026 National Defense Authorization Act (NDAA), a nearly 3,100-page piece of legislation that contains a host of provisions to fund several Department of Defense cybersecurity efforts in fiscal year 2026. Although cybersecurity is referenced hundreds of times across the NDAA, the legislation contains provisions that, once the law becomes effective, will mark significant shifts in how the US military manages major cybersecurity tasks, particularly in the timely arena of protecting mobile communications of top brass and AI deployments, as well as more understated, but potentially high-impact, infosec duties. Although estimates of NDAA cyber spending vary widely depending on the source or the year, according to a July budget request from the CFO for the Defense Department, the cyber activities in the NDAA request for FY2026 total approximately $15.1 billion, or 4.1% more than the previous year's request. This cyber budget bump stands in stark contrast to proposed double-digit cuts for civilian agencies. Around $9.1 billion of that amount goes to pure cybersecurity efforts, with the rest allocated to not clearly defined "cyberspace operations" of US Cyber Command, the Defense Intelligence Agency, the Defense Threat Reduction Agency, the National Security Agency, and the Office of the Under Secretary of Defense, Research and Engineering. Around $611.9 million of the total was allocated to DoD cyber research for the "deployment and modernization of existing capabilities and technologies that advance next generation cybersecurity and cyberspace operations programs."

Securing mobile phones for top officials

Few cyber risks are as operationally consequential as insecure mobile communications, and the NDAA directly targets this gap with new mandates for how the Pentagon procures and protects devices for top officials.
The bill requires that, no later than 90 days after enactment, the DoD will ensure that each wireless mobile phone and all related telecommunications the department provides to senior military officials or any other employee who performs sensitive national security functions are acquired under contracts or other agreements that require enhanced cybersecurity protections. Under the bill, enhanced cybersecurity protections mean encrypted data, capabilities to mitigate or obfuscate persistent device identifiers, including periodic rotation of network or hardware identifiers to reduce the risk of inappropriate tracking of the activity or location of the wireless mobile phones, and the capability to monitor the wireless mobile phones continuously. Under the legislation, 180 days after the bill's enactment, the Secretary of Defense must submit to the relevant congressional defense committees a report detailing the mobile telecommunications contracts the Pentagon has entered pursuant to these provisions, how it determined which employees these mobile provisions apply to, and the total costs of wireless mobile phones and telecommunication services involved. It is likely no coincidence that these provisions follow the so-called Signalgate incidents from earlier this year. During those incidents, the current DoD head Pete Hegseth shared over Signal via his private mobile device "nonpublic" information that identified "the quantity and strike times of manned US aircraft over hostile territory over an unapproved, unsecure network approximately two to four hours before the execution of those strikes," according to a report released on Dec. 2 by the department's inspector general.

AI and machine learning security and procurement requirements

Recognizing that AI now underpins everything from battlefield planning to intelligence analysis, the bill introduces sweeping requirements to safeguard these systems from emerging digital threats.
The NDAA spells out a spate of policy and procurement practices that the military should meet regarding artificial intelligence and machine learning (ML). First, the DoD, in consultation with other Federal agencies, has 180 days after the date of enactment to develop and implement a department-wide policy for the cybersecurity and associated governance of AI and ML systems and applications, as well as the models for AI and ML used in national defense applications. The policy must protect against security threats to AI and machine learning, including model serialization attacks, model tampering, data leakage, adversarial prompt injection, model extraction, model jailbreaks, and supply chain attacks. It also must employ cybersecurity measures throughout the life cycle of systems using artificial intelligence or machine learning. Moreover, the policy must reflect the adoption of industry-recognized frameworks to guide the development and implementation of AI and ML security best practices. Likewise, it must follow standards for governance, testing, auditing, and monitoring of systems using artificial intelligence and machine learning to ensure the integrity and resilience of such systems against corruption and unauthorized manipulation. Finally, the AI and machine learning policy must accommodate training requirements for the department’s workforce to ensure personnel are prepared to identify and mitigate vulnerabilities specific to AI and ML. The bill further spells out physical and cybersecurity procurement requirements for AI and machine learning systems. It specifies that the defense secretary must develop a framework for the implementation of cybersecurity and physical security standards and best practices relating to AI and ML technologies to mitigate risks to the department from the use of such technologies. 
The NDAA specifies that the framework must cover all relevant aspects of the security of AI and ML systems, including the risk posed to and by the DoD workforce (including insider threat risks), training and workforce development requirements regarding artificial intelligence security awareness, artificial intelligence-specific threats and vulnerabilities, professional development and education, supply chain threats (including counterfeits), tampering risks, unintended exposure or theft of AI systems or data, security management practices and more. It also requires the framework to draw on existing frameworks, including the NIST Special Publication 800 series and existing DoD frameworks, including the Cybersecurity Maturity Model Certification framework. Finally, under the legislation, the framework must prioritize the most highly capable AI systems that may be of highest interest to cyber threat actors, based on risk assessments and threat reporting, and impose requirements for security on contractors. Other AI provisions under the NDAA require the DoD to revise the mandatory training on cybersecurity for members of the Armed Forces and civilian employees of the department to include content related to the unique cybersecurity challenges posed by artificial intelligence. The bill further says that by April 1, 2026, the DoD needs to establish a task force on AI sandbox environments to identify, coordinate, and advance department-wide efforts to develop and deploy AI sandbox environments necessary to support experimentation, training, familiarization, and development across the military.

Other noteworthy cyber-related NDAA provisions

Beyond mobile security and AI governance, the NDAA includes a broad array of cyber measures with strategic implications across defense, intelligence, and international partnerships.
The following are among the more noteworthy cybersecurity provisions in the compromise bill:

Commercial spyware: The bill contains a “sense of Congress” statement that there is a national security need for the legitimate and responsible procurement and application of cyber intrusion capabilities, including efforts related to counterterrorism, counternarcotics, and countertrafficking. It expresses the view that the proliferation of commercial spyware presents significant and growing risks to national security, including to the safety and security of government personnel. It suggests that the US should oppose the misuse of commercial spyware “to target individuals, including journalists, defenders of internationally recognized human rights, and members of civil society groups, members of ethnic or religious minority groups, and others for exercising their internationally recognized human rights and fundamental freedoms, or the family members of these targeted individuals.” It further stipulates that the US should coordinate with allies and partners to prevent the export of commercial spyware tools to end-users likely to use them for malicious activities, and to share information on this issue robustly with allies.

Evaluation of national security risks posed by foreign adversary acquisition of American multiomic data: The bill stipulates that not later than 270 days after its enactment, the director of national intelligence, in consultation with the secretary of defense, the US attorney general, the secretary of health and human services, the secretary of commerce, the secretary of homeland security, the secretary of state, and the national cyber director, shall complete an assessment of risks to national security posed by human multiomic data from US citizens that is collected or stored by a foreign adversary from the provision of biotechnology equipment or services.
Multiomic data combines different types of biological data, such as genomics, transcriptomics, proteomics, and metabolomics, to provide a complete picture of a biological system.

Biological data for artificial intelligence: The legislation calls for tiered levels of cybersecurity safeguards and access controls for the storage of biological data and contains requirements for the protection of the privacy of individuals.

Cybersecurity regulatory harmonization: By June 1, 2026, the DoD must harmonize the cybersecurity requirements applicable to the defense industrial base, reduce the number of such requirements that are unique to a specific contract or other agreement, and submit to the congressional defense committees a report on the actions taken to carry out the harmonization.

Cybersecurity and resilience annex in Strategic Rail Corridor Network assessments: The legislation says the defense secretary, in coordination with the transportation secretary and the homeland security secretary, should conduct a periodic evaluation of the Strategic Rail Corridor Network. The assessment must include an annex containing a review of the cybersecurity and the resilience of the physical infrastructure of the Strategic Rail Corridor. The Strategic Rail Corridor is the interconnected network of rail corridors important to national defense and military mobility, as defined by the Department of Defense and the Federal Railroad Administration.

Cyber workforce recruitment and retention: The bill requires the defense secretary to fix the rates of basic pay for military employees working on cyber at a rate on par with comparable employees elsewhere in the government.
Supporting cybersecurity and cyber resilience in the Western Balkans: The NDAA contains a “sense of Congress” statement that United States support for cybersecurity, cyber resilience, and secure ICT infrastructure in Western Balkans countries will strengthen the region’s ability to defend itself from and respond to malicious cyber activity conducted by nonstate and foreign actors, including foreign governments, that seek to influence the region.

Demonstration of real-time monitoring capabilities to enhance weapon system platforms: If funds are available, the secretary of defense, in coordination with the undersecretary of defense for acquisition and sustainment and the service acquisition executives, will carry out a demonstration to equip selected weapon system platforms with onboard, near real-time, end-to-end serial bus and radio frequency monitoring capabilities to detect cyber threats and improve maintenance efficiency.
Fortinet, Ivanti, and SAP Issue Urgent Patches for Authentication and Code Execution Flaws
Fortinet, Ivanti, and SAP have moved to address critical security flaws in their products that, if successfully exploited, could result in an authentication bypass and code execution. The Fortinet vulnerabilities affect FortiOS, FortiWeb, FortiProxy, and FortiSwitchManager and relate to a case of improper verification of a cryptographic signature. They are tracked as CVE-2025-59718 and
Tools for securing MCP servers
Regardless of which MCP servers companies deploy, and for what purpose: “insecurities” should stay out of the picture. (Image: Gorodenkoff | shutterstock.com)

Model Context Protocol (MCP) connects AI agents to data sources and is enjoying growing popularity in the enterprise.
However, MCP is not free of security flaws either, as discoveries at SaaS provider Asana and IT giant Atlassian have shown. Since then, though, MCP security has come a long way. On one hand, the core protocol has made considerable progress, for example through support for OAuth as well as for third-party authentication servers and identity management systems. In addition, an official MCP Registry has been created, offering an overview of secure, publicly available MCP servers. Nevertheless, security gaps remain that can be exploited for all manner of cyber misdeeds: prompt injection, tool poisoning, token theft, cross-server attacks, and manipulated messages are just a few of many examples. In other words, companies that want to gain a competitive edge by building agentic AI systems must make considerable efforts to ensure that sensitive data does not leak. Fortunately, a range of tools promise to help. In this article you will learn what security tools for MCP should deliver, and which offerings in this space are worth a look.

What MCP security solutions should be able to do

The risk of data leaks, prompt injection, and other security threats exists regardless of whether companies connect their own AI agents to third-party MCP servers, their own MCP servers to third-party agents, or their own servers to their own agents. In every case, companies must verify authorizations and permissions, implement fine-grained access controls, and log everything. This also yields the requirements for MCP security solutions, which should provide:

MCP server discovery. It is easy for a company’s employees to download and use MCP servers.
With scanning services for MCP servers, companies can find every instance of a shadow MCP server in their environment.

Runtime protection. AI agents communicate with MCP servers in natural language, so MCP security tools should be able to monitor this communication for security problems such as prompt injection.

Authentication and access controls. The MCP protocol now supports OAuth, but that is only a first step. For additional security, tools with built-in control frameworks for zero trust and least privilege are worth considering.

Logging and observability. Tools and platforms should also be able to collect MCP logs, alert security teams to policy violations, capture compliance data, and feed logs into the existing security infrastructure.

MCP security offerings

Below, we have divided the vendors of MCP security tools into three categories. This list makes no claim to completeness.

Hyperscalers

For companies fully committed to a particular cloud platform, that hyperscaler’s MCP tools offer an easy entry point. Amazon Web Services (AWS) launched its own agentic AI platform in mid-2025: Amazon Bedrock AgentCore includes a gateway supporting multiple protocols (including MCP), an identity management system, and observability. Microsoft offers a basic Azure MCP server, including support for Azure Key Vault. Azure AI Foundry Agent Service and Azure API Management also support the Model Context Protocol. In addition, Microsoft’s Agent Framework, an open-source development kit, supports both MCP and Agent2Agent and promises, among other things, protection against prompt injection.
Google Cloud announced its MCP Toolbox for databases in early 2025, complete with built-in authentication and observability. The hyperscaler has also published a reference architecture for securing MCP servers on its cloud platform.

Large platform vendors

With MCP Server Portals, IT service provider Cloudflare has released a tool that lets companies centrally secure and monitor MCP connections; the feature is part of the Cloudflare One platform. Palo Alto Networks has several irons in the fire when it comes to MCP security. With Prisma AIRS, the company has released its own intermediary MCP server that sits between AI agents and the actual MCP server and detects malicious content and data. Its MCP Security tool, by contrast, is part of Cortex Cloud WAAS and inspects MCP communication at the network edge for malicious activity. SentinelOne’s Singularity Platform likewise provides visibility into the MCP interaction chain, offering, for example, alerts and automated incident response for local or remote MCP servers. Broadcom, too, has announced MCP security features for VMware Cloud Foundation that are meant to bring more security to agent-based workflows.

Startups

Acuvity’s platform promises comprehensive protection for MCP servers through what the vendor describes as a combination of least-privilege execution, immutable runtimes, continuous vulnerability scanning, authentication, and threat detection. API security startup Akto offers an MCP security platform comprising a discovery tool for identifying MCP servers in enterprise environments, security testing tools, and monitoring and threat detection features. Invariant Labs offers MCP-Scan, an open-source tool for static analysis and real-time monitoring of MCP servers.
With Guardrails, the startup also offers a commercial product: a proxy that sits between AI agents and MCP servers and is designed to protect against security risks. The tool also lets users define policies. Javelin’s AI Security Fabric platform likewise addresses MCP security, with features such as scanning MCP servers for risks and vetting data requests. Lasso Security provides an open-source MCP gateway that handles the configuration and lifecycle management of MCP servers and scrubs sensitive information from messages. (fm)

Want to read more interesting articles on IT security? Our free newsletter delivers everything security decision-makers and experts need to know, straight to your inbox.
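The runtime-protection requirement described above amounts to screening the natural-language traffic between agents and MCP servers before it is acted on. As a toy illustration only (real products use trained classifiers and far richer policies; the patterns and function name below are hypothetical), a minimal keyword-based screen might look like this:

```python
import re

# Naive, illustrative patterns for common prompt-injection phrasing.
# A production tool would use ML classifiers, not a fixed regex list.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"disregard (your|the) system prompt", re.I),
    re.compile(r"reveal (your|the) (system prompt|credentials|secrets)", re.I),
]

def screen_mcp_message(text: str):
    """Return (allowed, reason): block messages matching known injection phrasing."""
    for pat in INJECTION_PATTERNS:
        if pat.search(text):
            return False, pat.pattern
    return True, None
```

A gateway sitting between agent and server would run such a screen on every message and log or block matches; the hard part, which the commercial tools above compete on, is catching injections that avoid obvious phrasing.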
Personal branding works without an agency, too
The expert network puts your expertise front and center, optimally presented on our B2B platforms. (Image: People Images | shutterstock.com)

What’s good comes back around, as the saying goes. So too the expert network of CSO Deutschland, Computerwoche, and CIO.de. Even if you have never heard of it before: trust us, this comeback is a good thing!

Build your personal brand as an expert

Foundry’s German-language expert network now enables you, as an IT or business decision-maker, subject-matter expert, or researcher, to gain (more) visibility in the B2B space with your own technical or opinion pieces. And that visibility is (potentially) not limited to the German-speaking world.

Whether you want to vent your VMware or SAP frustrations, offer your own perspective on European efforts toward digital sovereignty, or share the best management and security approaches for multi-agent teams with your peers: as a member of the expert network, our B2B platforms are open to you for exactly that (after prior topic coordination 😊). And that is not all: experts have numerous further options for strengthening their personal brand at various levels with the help of our family of brands. Interested? Then apply now for the expert network of CSO Deutschland, Computerwoche, and CIO.de. You can find all further information here.
GitHub Action Secrets aren’t secret anymore: exposed PATs now a direct path into cloud environments
Many enterprises use GitHub Action Secrets to store and protect sensitive information such as credentials, API keys, and tokens used in CI/CD workflows. These private repositories are widely assumed to be safe and locked down. But attackers are now exploiting that blind trust, according to new research from the Wiz Customer Incident Response Team. They found that threat actors are using exposed GitHub Personal Access Tokens (PATs) to access GitHub Action Secrets and sneak into cloud environments, then run amok. “The root cause issue is the presence of these secrets in repos,” said David Shipley of Beauceron Security. “Cloud service provider access keys are gold, they can be extraordinarily long lived, and that’s what [attackers are] sniffing around for.”

GitHub Action Secrets aren’t secrets anymore

Wiz estimates that 73% of organizations using private GitHub Action Secrets repositories store cloud service provider (CSP) credentials within them. When PATs, which allow developers and automation bots to interact with GitHub repositories and workflows, are exploited, attackers can easily move laterally to CSP control planes. PATs can become a “powerful springboard” that allows attackers to impersonate developers and carry out a range of activities, explained Erik Avakian, technical counselor at Info-Tech Research Group. It’s like having a backstage pass into a company’s cloud environments, he said. “Once they’re holding that valid PAT, they can do all sorts of things in GitHub that lead directly back into a company’s AWS, Azure, GCP, or other types of cloud services, because GitHub treats that PAT like the real developer,” he said. With that access, threat actors can “poke around” various repositories and workflows and look for anything that hints at cloud access, configuration items, scripts, and hidden secrets, he noted.
If they get access to real cloud credentials, they “have the keys to the company’s AWS bucket, Azure subscriptions, and other workflows.” They can then spin up cloud resources, access databases, steal source code, install malicious files such as crypto miners, sneak in malicious workflows, or even pivot to other cloud services, while setting up persistence mechanisms so they can return whenever they want. “At that point, basically anything you can do in the cloud, so can they,” said Avakian.

Easily evading detection

Wiz found that a threat actor with basic read permissions via a PAT can use GitHub’s API code search to discover secret names embedded directly in a workflow’s yaml code, accessed via “${{ secrets.SECRET_NAME }}.” The danger is that this secret discovery method is difficult to monitor because search API calls are not logged. Further, GitHub-hosted Actions run from GitHub-managed resources that use legitimate, shared IP addresses not flagged as malicious. Attackers can abuse secrets, impersonate workflow origins to exploit trust, and potentially access other resources if code is misconfigured or reused elsewhere in the workflows. They can also persistently access the system. In addition, if the exploited PAT has write permissions, attackers can execute malicious code and remove workflow logs and runs, pull requests, and ‘created branches’ (isolated copies of codebases for dev experimentation). Because workflow logs are rarely streamed into security information and event management (SIEM) platforms, attackers can easily evade detection. Also, notably, a developer’s PAT with access to a GitHub organization makes private repositories vulnerable; Wiz research found that 45% of organizations have plain-text cloud keys stored privately, while only 8% are in public repositories.
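The discovery step Wiz describes can be reproduced defensively: the same `${{ secrets.NAME }}` references an attacker harvests via the code-search API are visible in your own workflow files. A short sketch (the function name is ours, not part of Wiz’s tooling) that enumerates them:

```python
import re

# Matches GitHub Actions secret references such as ${{ secrets.AWS_ACCESS_KEY_ID }}
SECRET_REF = re.compile(r"\$\{\{\s*secrets\.([A-Za-z_][A-Za-z0-9_]*)\s*\}\}")

def referenced_secrets(workflow_yaml: str) -> set:
    """Return the set of secret names referenced in a workflow file's text."""
    return set(SECRET_REF.findall(workflow_yaml))
```

Running this over the files in `.github/workflows/` shows exactly which secret names a read-only PAT would reveal, which is a useful input when deciding which credentials to rotate first.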
Shipley noted: “In some developers’ minds, a private repo equals safe, but it’s clearly not safe.”

How enterprise leaders can respond

To protect themselves against these threats, enterprises should treat PATs as they would any other privileged credentials, Avakian noted. Cloud infrastructure and cloud development environments should be properly locked down, essentially “zero trustifying” them through microsegmentation and privileged user management to contain them and prevent lateral pivoting. “Like any other credentials, tokens are best secured when they have reasonable expiration dates,” said Avakian. “Making tokens expire, rotating them, and using short-lived credentials will help thwart these types of risks.” Apply least privilege everywhere and give accounts only the rights they need, rather than taking an ‘admin everything’ approach, Avakian advised. More importantly, move cloud secrets out of GitHub workflows and ensure that the proper amount of monitoring and log review processes are in place to flag surprise or unexpected workflow or cloud creation events. Beauceron’s Shipley agreed, saying that enterprises need a multi-pronged strategy, good monitoring, incident response plans, and developer training processes that are reinforced with “meaningful consequences” for non-compliance. Developers must be motivated to follow secure coding best practices; building a strong security culture in developer teams is huge. “You can’t buy a blinky box for that part of the problem,” he said. “Criminals have stepped up their game,” said Shipley. “Organizations don’t have a choice. They have to invest in these areas, or they will pay.” Also, stop blindly trusting GitHub repos, he added. “The nature of repos is that they live forever. If you don’t know if you have cloud secrets inside your repos, you need to go and find them.
If they’re there, you need to change them yesterday, and you need to stop adding new ones.” If there is an upside, he noted, it’s that enterprises are “victims of their own success” as they’ve raised the bar with multi-factor authentication (MFA). Gains in general security awareness make it more difficult for criminals to obtain access and identities and compromise systems. “In some ways, this is a good sign,” said Shipley. “In a hilarious kind of way, it means [the criminals] are now moving into deeper levels requiring more effort.” This article originally appeared on InfoWorld.
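Shipley’s “go and find them” advice can start with a simple sweep for credential-shaped strings. For example, AWS access key IDs follow a documented format: the prefix “AKIA” plus 16 uppercase alphanumeric characters. The sketch below only scans text it is handed; a real audit would use a dedicated scanner such as gitleaks or trufflehog and cover full git history, not just the working tree:

```python
import re

# AWS access key IDs: "AKIA" followed by 16 characters from [0-9A-Z].
AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_aws_key_ids(text: str) -> list:
    """Return all substrings of `text` that match the AWS access key ID format."""
    return AWS_KEY_ID.findall(text)
```

(`AKIAIOSFODNN7EXAMPLE` in the test below is AWS’s own documentation example, not a live key.)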
December Patch Tuesday: Windows Cloud Files Mini Filter Driver hole already being exploited
Microsoft is finishing 2025 by issuing only 57 patches for Windows and other products for December Patch Tuesday, but one vulnerability is already being exploited as a zero day and needs to be addressed fast. It’s an escalation of privilege vulnerability in Windows Cloud Files Mini Filter Driver (CVE-2025-62221), described as a use-after-free problem in which a program tries to use a block of memory that has already been returned to system control. The attack complexity is low. The worst case scenario is that a threat actor could leverage it to escalate access privileges. “Elevation of privilege bugs turn a foothold into a full breach,” Satnam Narang, senior staff research engineer at Tenable, said in an email, “as attackers often use them to conduct post-compromise activity after they have gained initial access through other means, such as social engineering or exploitation of another flaw. “Windows Cloud Files Mini Filter Driver is an attractive target because it is a file system driver that enables cloud applications to access file system functionalities,” he added.  Jack Bicer, director of vulnerability research at Action1, said patching this vulnerability is “the most urgent concern” because it is actively being exploited by any attacker who can get any level of local access. “Active exploitation means real incidents are already occurring,” he pointed out. “This vulnerability is likely to be combined with phishing, browser-based attacks, malicious documents, or other initial footholds to achieve full system takeover. The attack potential includes disabling security tooling, accessing sensitive information, moving laterally across the organization’s network, and establishing persistent high-privilege access. 
Because the impacted driver is widely deployed across enterprise environments, the exposure is broad and the potential operational consequences significant.” IT executives should ensure operational teams allocate resources to accelerated patching, enforce least-privilege access controls, and strengthen monitoring for anomalous activity across systems that cannot be patched immediately, he stressed. “A focused, time-bound remediation plan, beginning with actively exploited and RCE vulnerabilities, will provide the greatest reduction in organizational risk and the strongest defense against potential widespread compromise,” he said. Unfortunately, said Kevin Breen, senior director of cyber threat research at Immersive, Microsoft has not provided any details on how this exploit is being abused or provided any indicators of compromise, making it harder for defenders to start proactive threat hunting.

Holes in Exchange Server

Michael Walters, president of Action1, drew attention to two vulnerabilities in Microsoft Exchange Server:

- CVE-2025-64666, an escalation of privilege (EoP) hole allowed by improper input validation
- CVE-2025-64667, which allows a threat actor to spoof over a network

While rated Important and assessed as Exploitation Less Likely, Walters notes that these flaws affect core messaging and identity surfaces, and can become critical when chained, such as spoofing enabling phishing, or EoP facilitating mailbox theft. Tyler Reguly, associate director of R&D at Fortra, said CSOs should assign priority to two other vulnerabilities that Microsoft rated as critical this month:

- CVE-2025-62557, a use-after-free vulnerability in Microsoft Office that allows an unauthorized attacker to execute code locally
- CVE-2025-62554, described as an access of resource using incompatible type (“type confusion”) hole in Microsoft Office that allows an unauthorized attacker to execute code locally
Because these list the Outlook Preview Pane as an attack vector, they worry Reguly. “I always find that one of the scariest attack vectors that can be listed,” he said. “Vulnerabilities that don’t rely on user interaction are vulnerabilities that we want to pay attention to.”

Copilot hole for those using JetBrains

Breen of Immersive also said organizations using GitHub Copilot for the JetBrains application development platform should patch a hole in Copilot promptly, before threat actors find a way to exploit it. The vulnerability report states that it’s possible to gain the ability for code execution on affected hosts by tricking the LLM into running commands that bypass the guardrails and appending instructions to the user’s “auto-approve” settings, Breen notes. This can be achieved through a Cross Prompt Injection, he said, where the prompt is modified, not by the user, but by the LLM agents as they craft their own prompts based on the content of files or data retrieved from a Model Context Protocol (MCP) server. Although Microsoft has marked this exploitation as Less Likely, Breen said, CSOs taking a risk-based approach should note that developers typically have access to API keys and secrets that could enable a large attack surface for attackers.

SAP vulnerabilities

Separately, SAP’s Security Notes for December include four HotNews Notes, two of which are given CVSS scores in the 9s:

Note #3685270 [CVE-2025-42880] patches a code injection vulnerability in SAP Solution Manager. According to researchers at Onapsis, a remote-enabled function module could allow an authenticated attacker to inject arbitrary code, leading to a high impact on the confidentiality, integrity, and availability of the system. The vulnerability is patched by adding appropriate input sanitization to the affected function module.
Given the central role of SAP Solution Manager in the SAP system landscape, Onapsis strongly recommends that this be patched quickly.

Note #3685286 [CVE-2025-42928] was issued after Onapsis was able to exploit a deserialization vulnerability in the SAP jConnect SDK for Sybase Adaptive Server Enterprise (ASE) to launch remote code execution by providing specially crafted input to the component. “A successful exploit requires high privileges, preventing the vulnerability from being tagged with a CVSS score of 10.0,” Onapsis said.

Note #3683579 affects SAP Commerce Cloud customers. SAP Commerce Cloud uses a version of Apache Tomcat that is vulnerable to CVE-2025-55754 and CVE-2025-55752. This security note, with a CVSS score of 9.6, provides fixes that include a patched version of Apache Tomcat. If unpatched, these flaws put the application’s confidentiality, integrity, and availability at high risk, says Onapsis.

Note #3668705, tagged with a CVSS score of 9.9, was initially released on SAP’s November Patch Day and patches a code injection vulnerability in SAP Solution Manager. This note was updated with additional correction instructions.

Advice for 2026

Finally, with this last batch of patches for the year from Microsoft, Fortra’s Tyler Reguly provided some context. “In 2025, Microsoft patched 1275 vulnerabilities,” he said in an email. “Which should mean roughly 106 vulnerabilities each month, yet December only saw 70 vulnerabilities when you include the third-party CNA vulnerabilities. If all things were equal, December should account for 8.3% of all CVEs fixed by Microsoft. Instead December only contains 5.5% of this year’s total CVEs.
I suppose we can thank Microsoft for an early Christmas gift.” “If I were in charge of all aspects of security for an enterprise, as we wrap up the year and think about 2026 budgets,” he added, “I’d probably be thinking about the two critical Office vulnerabilities that impact the Preview Pane and consider the email protections that I have in place and where I can make investments in 2026 to further improve the email security of my organization. Between ‘silent attacks’ that utilize the preview pane, phishing, and all the other risks that come to us via email, it is one of the places where organizations can still do more to shore up their security posture and put themselves in a good place.”
Microsoft Patch Tuesday, December 2025 Edition
Microsoft today pushed updates to fix at least 56 security flaws in its Windows operating systems and supported software. This final Patch Tuesday of 2025 tackles one zero-day bug that is already being exploited, as well as two publicly disclosed vulnerabilities.
Exploitation of Critical Vulnerability in React Server Components (Updated December 10)
We discuss the CVSS 10.0-rated RCE vulnerability in the Flight protocol used by React Server Components. This is tracked as CVE-2025-55182. The post Exploitation of Critical Vulnerability in React Server Components (Updated December 10) appeared first on Unit 42.
North Korea-linked Actors Exploit React2Shell to Deploy New EtherRAT Malware
Threat actors with ties to North Korea have likely become the latest to exploit the recently disclosed critical React2Shell security flaw in React Server Components (RSC) to deliver a previously undocumented remote access trojan dubbed EtherRAT. "EtherRAT leverages Ethereum smart contracts for command-and-control (C2) resolution, deploys five independent Linux persistence mechanisms, and
Gemini for Chrome gets a second AI agent to watch over it
Google is deploying a second AI model to monitor its Gemini-powered Chrome browsing agent after acknowledging the agent could be tricked into taking unauthorized actions through prompt injection attacks. “We’re introducing a user alignment critic where the agent’s actions are vetted by a separate model that is isolated from untrusted content,” the company said in a blog post about the addition. If the critic determines an action doesn’t match what the user asked for, it blocks the action, Google said. “The primary new threat facing all agentic browsers is indirect prompt injection,” Chrome security engineer Nathan Parker wrote in the post, describing a situation where an agent is prompted to process information that then seeks to modify the initial prompt. The Gemini-powered browsing agent, launched in September and currently in preview, can navigate websites, click buttons, and fill forms while users are logged into email, banking, and corporate systems. Malicious instructions hidden in web pages, iframes, or user-generated content could “cause the agent to take unwanted actions such as initiating financial transactions or exfiltrating sensitive data,” Parker wrote. That’s where the user alignment critic comes in: The second model reviews each proposed action before Chrome executes it, acting as what Parker called “a powerful, extra layer of defense against both goal-hijacking and data exfiltration.”

Why prompt injection is hard to stop

Prompt injection has emerged as the top vulnerability in AI systems over the past year. OWASP found it in 73% of production AI deployments it assessed in 2024, ranking it the number one risk in its list of threats to large language model applications. The UK’s National Cyber Security Centre warned Sunday that prompt injection attacks may never be fully mitigated because LLMs can’t reliably distinguish between instructions and data.
The agency called it a “confused deputy” vulnerability, where a trusted system is tricked into performing actions on behalf of an untrusted party. Researchers have already demonstrated the threat. In January, attackers embedded instructions in a document that caused an enterprise AI system to leak business intelligence and disable its own safety filters. Security firm AppOmni disclosed last month that ServiceNow’s AI agents could be manipulated through instructions hidden in form fields, with one agent recruiting others to perform unauthorized actions. For Chrome, the stakes are particularly high. A compromised browsing agent would have the user’s full privileges on any logged-in site, potentially bypassing the browser’s site isolation protections that normally prevent websites from accessing each other’s data.

Google’s two-model defense

To address these risks, Google’s solution splits the work between two AI models. The main Gemini model reads web content and decides what actions to take. The user alignment critic sees only metadata about proposed actions, not the web content that might contain malicious instructions. “This component is architected to see only metadata about the proposed action and not any unfiltered untrustworthy web content, thus ensuring it cannot be poisoned directly from the web,” Parker wrote in the blog. When the critic rejects an action, it provides feedback to the planning model to reformulate its approach. The architecture is based on existing security research, drawing from what’s known as the dual-LLM pattern and CaMeL research from Google DeepMind, according to the blog post. Google is also limiting which websites the agent can interact with through what it calls “origin sets.” The system maintains lists of sites the agent can read from and sites where it can take actions like clicking or typing. A gating function, isolated from untrusted content, determines which sites are relevant to each task. 
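Stripped to its essentials, the metadata-only critic described above is a gate between the planning model and the executor. The sketch below is a toy illustration of that dual-model pattern: a simple origin check stands in for the second model's alignment judgment, and all names (`ProposedAction`, `critic_allows`, `run_step`) are invented for illustration, not Chrome's actual API.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    # Metadata only -- the critic never sees raw page content,
    # so instructions injected into the page cannot reach it.
    kind: str           # e.g. "click", "type", "navigate"
    target_origin: str  # site the action would touch
    summary: str        # planner-generated description of the step

def critic_allows(user_goal: str, action: ProposedAction,
                  allowed_origins: set[str]) -> bool:
    """Toy stand-in for the alignment critic: approve an action only if
    it stays on origins relevant to the task. The real critic is a
    second model judging goal alignment from the same metadata."""
    return action.target_origin in allowed_origins

def run_step(user_goal: str, action: ProposedAction,
             allowed_origins: set[str]) -> tuple[str, str]:
    """Gate each proposed action before execution; a rejection is fed
    back so the planner can reformulate."""
    if not critic_allows(user_goal, action, allowed_origins):
        return ("blocked", f"{action.kind} on {action.target_origin} "
                           "does not match the user's request")
    return ("executed", action.summary)
```

The key design point mirrors the article: the gate's inputs are restricted by architecture, not by trusting the model to ignore hostile text.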
The company acknowledged this first implementation is basic. “We will tune the gating functions and other aspects of this system to reduce unnecessary friction while improving security,” Parker wrote. Beyond the user alignment critic and origin controls, Chrome will require user confirmation before the browsing agent navigates to banking or medical sites, uses saved passwords through Google Password Manager, or completes purchases, according to the blog post. The browsing agent has no direct access to stored passwords. A classifier runs in parallel checking for prompt injection attempts as the agent works. Google has built automated red-teaming systems generating malicious test sites, prioritizing attacks delivered through user-generated content on social media and advertising networks.

Grappling with an unsolved problem

The prompt injection challenge isn’t unique to Chrome. OpenAI has called it “a frontier, challenging research problem” for its ChatGPT agent features and expects attackers to invest significant resources in these techniques. Gartner has gone one step further and advised enterprises to block AI browsers in their systems. The research firm warned that AI-powered browsing agents could expose corporate data and credentials to prompt injection attacks. The NCSC took a similar position, urging organizations to assume AI systems will be attacked and to limit their access and privileges accordingly. The agency said organizations should manage risk through design rather than expecting technical fixes to eliminate the problem. Chrome’s agent features are optional and remain in preview, the blog post said. This article first appeared on Computerworld.
Racks, sprawl and the myth of redundancy: Why your failover isn’t as safe as you think
The physical roots of resilience

Five years ago, at 2 a.m., I stood in a data center aisle watching a core switch lose a power supply. The room was cold, the fans loud and the alert light blinked amber. Within four seconds, the backup unit took over. Not a single packet dropped. That seamless, silent shift captured the essence of networking redundancy at its best: automatic, invisible and flawless. It was the kind of moment engineers live for — a quiet victory in the dark. Today, that same principle faces relentless pressure. Networks have outgrown physical racks and now span hybrid clouds, edge nodes, SD-WAN overlays, API gateways and micro-segmented virtual fabrics. Redundancy no longer means just extra hardware or twin fiber links. It demands survival against misconfigured routing policies, regional DNS outages, zero-day exploits in router firmware and cascading failures triggered by human error or supply chain compromise. The landscape has evolved dramatically, but the core lessons — built on discipline, foresight and trust — endure. My journey began with physical infrastructure, back when reliability was measured in cables and chassis. Every server connected through dual paths, with link aggregation bundles split across two top-of-rack switches, each uplinked to separate core routers over distinct fiber routes. I once spent an entire weekend labeling cables with color-coded heat shrink: red for primary, blue for backup. It was meticulous, almost meditative work. When a technician accidentally kicked a patch cord loose during a floor tile replacement, traffic shifted in under 200 milliseconds. No alarms triggered. No user complaints. The monitoring dashboard stayed green. That reliability felt like muscle memory: predictable, testable and deeply tangible. It was redundancy you could touch, trace and trust.

Cloud complexity and policy traps

Networks, however, no longer stay confined to racks. 
They live in routing tables, BGP sessions, cloud control planes and software-defined overlays. Many organizations rush to multi-region cloud setups, believing geographic distance alone guarantees resilience. It does not. Last year, I oversaw a global e-commerce platform with active-passive failover across two regions. Health checks withdrew prefixes from the primary if latency crossed 80 ms. During a routine maintenance window, a junior engineer mistyped a BGP community tag. Instead of marking one subnet, the change blocked the entire backup path with a no-export rule. Traffic surged onto an already saturated primary link, pushing packet loss to 11 percent. The backup route was healthy, advertising correctly and fully reachable — yet policy prevented its use. We corrected the error in six minutes, but customers felt the impact for nearly 40. The takeaway was stark: redundancy without aligned policies is mere decoration, expensive and useless when it matters most. This mirrors the 2024 Cloudflare 1.1.1.1 hijack incident caused by a leaked Border Gateway Protocol (BGP) route. As cloud environments grow, consistency becomes harder to maintain. A small template tweak in one availability zone can cascade across regions if copied unchecked, turning intended protection into widespread failure. Teams now manage configurations like code, with versioning, peer reviews, staged testing and automation to enforce uniformity. Tools like infrastructure-as-code pipelines, policy engines and drift detection systems are no longer optional — they are the new standard for scalable resilience. SD-WAN extends these challenges to branch locations, linking multiple internet paths for fluid failover and intelligent, application-aware routing. It promises simplicity and agility. Yet a single carrier firmware update can degrade performance everywhere, even when links remain active. I’ve seen MTU mismatches, encryption mismatches and path preference bugs ripple through hundreds of sites in minutes. 
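The no-export mistake above argues for a pre-change validation step: before a community change is applied, check that every prefix would still have at least one exportable path afterwards. The sketch below is a minimal, hypothetical illustration of that check, not a real BGP implementation; the path and community representations are invented.

```python
# "no-export" models the well-known BGP community that kept the
# healthy backup route in the incident from being advertised.
NO_EXPORT = "no-export"

def exportable_paths(paths: list[dict]) -> list[dict]:
    """paths: dicts like {"via": "backup", "communities": {...}}.
    Keep only the paths that would still be advertised."""
    return [p for p in paths if NO_EXPORT not in p["communities"]]

def validate_change(prefix_paths: dict[str, list[dict]]) -> list[str]:
    """prefix_paths maps each prefix to its paths as they would look
    *after* the proposed change. Returns the prefixes the change
    would strand with no exportable path -- i.e. the ones that
    should block the change from reaching production."""
    return [pfx for pfx, paths in prefix_paths.items()
            if not exportable_paths(paths)]
```

Run against a simulated post-change table, an empty result means the change is safe to stage; a non-empty result is exactly the "healthy but unusable backup" condition the incident produced.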
Phased rollouts, strict change policies and gradual deployment rings prevent blanket disruption. The same discipline applies at the edge, where devices in retail stores, warehouses or remote clinics depend on local backups for speed and continuity. A rushed firmware push can erase that safety net across all units, forcing field teams to restore from USB drives or mobile hotspots. Careful staging, rollback plans and on-site recovery kits are now part of every deployment checklist. Routing mistakes and DNS breakdowns lurk as quiet, persistent risks. One errant rule can dead-end traffic and even solid backups stay idle if policies block them. Robust prefix filters, route validation and RPKI enforcement keep paths safe. Likewise, DNS backups must operate independently — free of shared anycast IPs, providers or control planes — to avoid joint collapse. Security checks, DNSSEC and diverse resolver strategies strengthen failover. These are not add-ons; they are foundational to modern network hygiene.

Anticipating the inevitable: Pre-mortem and defense in depth

The next outage is already taking shape, hidden until the first alert. It might hide in a supply chain flaw inside a trusted IOS-XR patch, quietly altering routes worldwide. Or it could stem from a single flawed intent policy in an ACI fabric, isolating entire application layers with surgical precision. External forces like wildfires, floods or geopolitical events can force data center evacuations, knocking out power grids and delaying generators for hours. The 2021 Fastly global outage — triggered by one valid config change exposing a hidden bug — shows how fast a CDN can collapse. These scenarios are not speculation; they are probabilities waiting to strike, each with its own failure signature. Experience reframes the question. Failure is inevitable in infrastructure work. What matters is how it unfolds, how precisely and whether the design anticipates that exact failure mode. 
Resilience now means shaping failure’s impact, not stopping it. This mindset demands a new ritual: the pre-mortem. In every design review, we assume total failure at peak load. We trace dependencies — transit providers, certificate authorities, undersea cables, even physical access roads. We hunt for shared fate: two “diverse” carriers in the same conduit, a single control plane for multi-region DNS or a vendor update applied globally without validation. Each discovery triggers action: a new peer, a policy rewrite, a satellite link or a dark fiber lease. AWS recommends pre-mortems in its Reliability Pillar. Two years ago, I sat in a dim network operations center at 3 a.m., cold coffee forgotten, as one BGP update spread chaos via a global transit provider. A peer leaked a default route with lower preference, sucking outbound traffic into oblivion. The backup path was fully functional, yet our policy still favored the tainted route. For 17 minutes, half the internet vanished for users. Customers raged. Executives demanded answers. A swift prefix filter fixed it, but the lesson lingered: redundancy requires not just a second path, but intelligence to choose it wisely and reject the wrong one. That night, I rewrote our change process: no routing policy touches production without simulation, peer review and automated testing. Observability unifies the picture. A consolidated view of logs, traffic flows, performance metrics and control plane health spots weakening paths before collapse, enabling fixes before users notice. Cost tensions persist. Leaders crave full redundancy yet settle for cheaper, correlated links that fail together. Genuine resilience needs true separation, geographic distance and sometimes higher budgets, all justified by the disruptions avoided. A $50,000 cross-connect can prevent a $2 million outage. The math is simple. 
Automation now manages routine failovers, sensing issues and shifting traffic instantly so engineers tackle root causes, not manual switches. The next disruption looms from software bugs, policy slips, physical cuts or zero-day attacks. Effective planning means expecting breakdown, mapping vulnerabilities and scripting clear recovery. In a recent breach, an attacker tried hijacking core routing via a compromised jump host. Layered defenses — RPKI, prefix filters and automated session resets — contained it. Users saw only a 40 ms blip. Redundancy had matured from spare cables into a dynamic blend of security, automation and vigilance. The foundational principles hold: remove single points of failure, secure real separation, automate responses and monitor relentlessly. The scale has ballooned — from patch panels to cloud regions, from local switches to global routes — but the mission stays constant: keep data moving regardless of obstacles. Outages will come. They always do. But with redundancy woven into a tested, trusted and adaptable network, their sting will fade and the packets will keep flowing. This article is published as part of the Foundry Expert Contributor Network.
Four Threat Clusters Using CastleLoader as GrayBravo Expands Its Malware Service Infrastructure
Four distinct threat activity clusters have been observed leveraging a malware loader known as CastleLoader, strengthening the previous assessment that the tool is offered to other threat actors under a malware-as-a-service (MaaS) model. The threat actor behind CastleLoader has been assigned the name GrayBravo by Recorded Future's Insikt Group, which was previously tracking it as TAG-150. The
Implementing NIS2 – without getting bogged down in paperwork
The EU NIS2 directive took effect in Germany on December 6, 2025. This article shows how DevSecOps can automate much of the mandatory work. Vadi Fuoco – shutterstock.com
NIS2 is emblematic of the core problem with European directives and regulations: they generate unnecessary paperwork and too rarely deliver real impact. 
Whether it is the Supply Chain Act, GDPR impact assessments or the IT Security Act, they all share one trait: companies must produce gigantic mountains of documentation. This documentation neither improves actual security nor can it realistically be audited. Compliant, as a rule, is whoever can present extensive documentation of all processes and regular reviews. These documents are usually so voluminous that producing them already imposes nearly unreasonable effort, and checking them manually becomes practically impossible. Even if someone did review them, the information would not be precise enough to demonstrate real security.

Security belongs in the planning stage

In many companies this creates an absurd practice: the technical team builds working infrastructure, and, entirely detached from that, a compliance officer afterwards writes a pages-long justification of why the solution is supposedly secure. That is roughly as if Volkswagen built a car and only afterwards someone wrote 40 pages on why this car should meet safety standards. In real industry it naturally works the other way around: safety requirements feed into the design, minimum technology standards are defined, and quality processes monitor implementation automatically. Compliance emerges from engineering, not from binders. In other areas, such as tax auditing, this problem was recognized long ago and the automation of the relevant processes was mandated by law (see electronic cash registers and tamper-proof accounting software). That not only spares honest businesses enormous manual work but above all reduces the risk of abuse. Unfortunately, few things in Germany are implemented as rigorously as the collection of our taxes. 
Unlike with taxes, however, companies should have an intrinsic interest in implementing their IT security correctly. The fine for a NIS2 violation can reach up to ten million euros or two percent of worldwide annual turnover. The economic damage from successful cyberattacks is often existential and already adds up to hundreds of billions of euros per year. Even though it is not explicitly required by law, it is now possible – not least thanks to AI-assisted tooling – to automate security processes and their complete documentation to the point where security, compliance and auditability merge into a single technical process. That not only saves resources but also raises actual security. What this can look like in detail is shown by the example of a SaaS application in the cloud.

IT in transition: from text documents to declarative engineering

At its core, NIS2 demands three things: concrete security measures, processes and policies to govern those measures, and solid evidence that they work in day-to-day operation. Process documentation – policies, responsibilities and workflows – is nothing fundamentally new for most larger companies. ISO 27001-based information security management systems (ISMS), HR processes and management handbooks have often existed for years. What matters for NIS2 are therefore two layers above all: the technical measures, and the evidence that they are effective. This is exactly where the upheaval of recent years shows. In the past, the concepts, measures and specifications of software and IT infrastructure were documented mostly as prose. Program code was too complex; configurations were scattered across files, ticket systems or the heads of individual administrators. Documents were written after the fact – often by colleagues from outside the field. 
This approach was problematic for two main reasons: it does not scale in growing, distributed environments, and it clashes with the goal of consistently automating technical processes. Modern systems therefore rely on practices such as test- or behaviour-driven development and Infrastructure as Code (IaC), which – applied consistently – largely replace textual documentation. The technical specifications NIS2 requires can reference these artifacts directly: IaC definitions pin down encryption, network segments or backup scenarios, and CI/CD pipelines roll them out to production in an audit-proof way. Changes are thus not only described with technical precision but also traceable in time via commits and deployments. Evidence for aspects that cannot be captured fully declaratively – such as the security of the software supply chain or of the application code – can be covered by security checks in the CI/CD pipeline and continuous assessment through SIEM and CNAPP systems. What this looks like in practice is clearest in the following areas: identity and access management, vulnerability management in the software supply chain, and monitoring, incident handling and reporting obligations.

Identity & Access Management: policies as code instead of role spreadsheets

Identity and access management is one of the central pillars of NIS2. What is required is not just "some" roles but an access model built on need-to-know, least privilege and separation of duties. In practice this is best thought of in three layers: deliberate granting of rights, a realistic lifecycle for those rights, and an architecture that prevents lateral movement as far as possible. 
Instead of maintaining permissions in Excel sheets, admin UIs and scattered wikis, roles and access rights are defined as policies as code or infrastructure as code – for example, as Terraform modules or JSON/YAML policies in a Git repository. All changes go exclusively through merge requests and are rolled out via a CI/CD pipeline. It is thus fully traceable who changed which rights, who approved the change and when it went to production. The documentation and evidence duties of NIS2 then follow directly from the Git history and pipeline logs, without anyone having to write additional Word concepts. A role model alone, however, is not yet least privilege. NIS2 requires that rights be reviewed regularly and superfluous permissions removed. In cloud environments with hundreds of accounts, services, pods and functions, this is barely manageable by hand. This is where cloud infrastructure entitlement management (CIEM) systems come in. They read all effective permissions out of the environment, correlate them with audit logs, and show which rights are actually used and where over-privileging exists. This is especially critical for non-human identities (service accounts, workloads), because very broad rights are often granted exactly there – rights that later serve attackers as a springboard. Some startups now even offer CIEM systems that use AI to automatically generate IAM policies for the respective roles.

Vulnerability management & software supply chain: SBOMs instead of scanner PDFs

The second block that NIS2 and the new implementing regulation 2024/2690 for digital services bring into sharp focus is vulnerability management in a company's own code and in the supply chain. 
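The CIEM correlation described above, comparing the rights an identity holds against the rights its audit logs show it actually using, reduces in the simplest case to a set difference. A minimal sketch with invented permission and identity names:

```python
def overprivileged(granted: set[str], used: set[str]) -> set[str]:
    """Rights present in the policy but absent from the audit window --
    candidates for removal under least privilege."""
    return granted - used

def review(identities: dict[str, tuple[set, set]]) -> dict[str, set]:
    """identities maps a name to (granted, used) permission sets.
    Returns only the identities that carry removable rights, which in
    practice are often broad non-human service-account roles."""
    return {name: extra for name, (granted, used) in identities.items()
            if (extra := overprivileged(granted, used))}
```

A real CIEM product adds the hard parts, resolving *effective* permissions across policy layers and correlating months of audit logs, but the output it feeds into a least-privilege review has exactly this shape.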
Required are regular vulnerability scans, procedures for assessment and prioritization, timely handling of critical vulnerabilities, orderly vulnerability handling and – where necessary – coordinated vulnerability disclosure. For cloud and SaaS providers, supply chain duties come on top, for example toward cloud, CI/CD and registry service providers. In classical vulnerability management, SCA, SAST and DAST scanners are simply thrown at everything. The result is endless lists of findings, a large share of which are false alarms or irrelevant to the specific system. This data then lands in Excel sheets or a vulnerability database where teams try to assign priorities. With zero-day flaws in particular, this leads to hectic ad-hoc analysis: Which of our components are affected? Is the vulnerability even exploitable in our architecture? What do we do while no patch exists yet? The modern approach is to consolidate all DevSecOps findings in one central system. Results from SCA, SAST and DAST flow together there, are enriched with context from software bills of materials (SBOMs), architecture and exposure, and are pre-filtered with the help of AI. False positives can thus be reduced drastically, leaving a much smaller set of genuinely relevant vulnerabilities, including an assessment of how critical they are in the specific setup. These condensed findings can be handed directly to ticket systems and the SOC, where they are treated like incidents, tracked and evaluated for NIS2 reports. A sprawling scanner output thus becomes a controllable process that reflects both the legal requirements and operational reality.

Monitoring, incident handling and the reporting office

The third area where NIS2 quickly turns into a paper tiger is the combination of monitoring, incident response and the new reporting obligations. 
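The consolidation step described above, merging scanner output and filtering it with SBOM and exposure context before ranking, can be sketched roughly as follows. Field names and the ranking rule are illustrative assumptions, not any specific product's schema:

```python
def triage(findings: list[dict], sbom_components: set[str],
           exposed_components: set[str]) -> list[dict]:
    """Consolidate raw scanner findings into a ranked shortlist.

    findings: dicts like {"id": "CVE-...", "component": ..., "cvss": ...}
    sbom_components: components that actually ship (from the SBOM)
    exposed_components: components reachable by an attacker
    """
    # Drop findings for components that never reach production --
    # a large source of scanner noise in practice.
    relevant = [f for f in findings if f["component"] in sbom_components]
    # Enrich with exposure context from the architecture.
    for f in relevant:
        f["exploitable"] = f["component"] in exposed_components
    # Exploitable and severe first; the long tail can wait.
    return sorted(relevant,
                  key=lambda f: (f["exploitable"], f["cvss"]),
                  reverse=True)
```

The AI pre-filtering the article mentions would slot in as an extra scoring signal; the structural point is that context, not raw scanner volume, drives the queue handed to the SOC.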
The directive sets clear deadlines: an early warning within 24 hours, a structured report after 72 hours, a final report after one month at the latest. Many organizations respond by building new templates, Excel lists and reporting handbooks – often largely detached from the existing SOC. In a real incident that means the SOC fights the incident while, in parallel, a "NIS2 task force" tries to assemble information from tickets, emails and ad-hoc chats into something that fits a form. The result is duplicated work, information loss and reports that fill pages but say little about how well detection and response actually perform. In a cloud SaaS environment a different path suggests itself: instead of treating NIS2 reporting as a separate documentation project, a modern DevSecOps-based SOC is built so that all security-relevant signals converge in one place from the start: cloud infrastructure, CI/CD pipelines, applications, IdP and IAM. The rules by which this data is correlated, enriched and turned into incidents are defined and versioned as code. Detection logic (threat detection and response), thresholds and playbooks live in the repository and are rolled out through pipelines like application code. Large parts of classical SOC work can thus be automated: raw logs become consistent incidents with context, without anyone manually copy-pasting text fragments. CNAPP (cloud-native application protection platform) tools and similar platforms simultaneously handle storage and archiving of the data, so the evidence of monitoring activity accrues inside the system instead of being produced in separate documentation loops. 
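The 24-hour, 72-hour and one-month reporting clock described above can live in the incident record itself, so the ticket carries its own deadlines instead of a task force reconstructing them. A minimal sketch, assuming "one month" is approximated as 30 days:

```python
from datetime import datetime, timedelta

def nis2_deadlines(detected_at: datetime) -> dict[str, datetime]:
    """Derive the three NIS2 reporting deadlines from the moment an
    incident was detected, for attachment to the incident ticket."""
    return {
        "early_warning":   detected_at + timedelta(hours=24),
        "incident_report": detected_at + timedelta(hours=72),
        # "one month at the latest", approximated here as 30 days
        "final_report":    detected_at + timedelta(days=30),
    }
```

Stored alongside the detection timestamp, these fields also make metrics such as time-from-detection-to-first-report directly computable from the ticket data.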
Machine learning and AI components additionally help reduce false positives, cluster similar events and surface anomalous patterns – the SOC concentrates on the few incidents that truly need attention. At the process level, playbooks and reporting paths remain important – but lean. An IR playbook defines incident classes, escalation paths and communication rules, including the criteria for when an incident counts as "NIS2-significant". A reporting process governs who consolidates the information from the SOC and business units and submits it via the BSI reporting office. Here, too, the actual documentation arises largely automatically: incident tickets contain the timeline, affected services, impact, cause and measures; a "NIS2-relevant" flag and a reporting status link them to the external reports. From SIEM and IR data, metrics such as MTTD, MTTR or the time between detection and first report can be computed directly – exactly the figures that reveal whether NIS2 is a lived process or just a new drawer in the filing cabinet.

NIS2 as an architecture test, not just a documentation exercise

NIS2 forces companies to make their security measures, processes and evidence explicit. That is uncomfortable – especially for organizations that have so far worked largely ad hoc. But whether the outcome is a paper tiger or a real security gain is decided not in the legal text but in the architecture. Anyone who tries to "document away" the directive mainly with Word, PowerPoint and Excel will produce a lot of effort and little resilience. If, on the other hand, IdP and IAM, CI/CD pipelines, SBOM and vulnerability tools, SIEM and the IR platform are set up so that they deliver the required controls and evidence almost as a by-product, NIS2 compliance comes as a side effect of a modern security landscape. (jm)
Storm-0249 Escalates Ransomware Attacks with ClickFix, Fileless PowerShell, and DLL Sideloading
The threat actor known as Storm-0249 is likely shifting from its role as an initial access broker to adopt a combination of more advanced tactics like domain spoofing, DLL side-loading, and fileless PowerShell execution to facilitate ransomware attacks. "These methods allow them to bypass defenses, infiltrate networks, maintain persistence, and operate undetected, raising serious concerns for
How to Streamline Zero Trust Using the Shared Signals Framework
Zero Trust helps organizations shrink their attack surface and respond to threats faster, but many still struggle to implement it because their security tools don’t share signals reliably. 88% of organizations say they have faced significant challenges in trying to implement such approaches, according to Accenture. When products can’t communicate, real-time access decisions break down. The
Google Adds Layered Defenses to Chrome to Block Indirect Prompt Injection Threats
Google on Monday announced a set of new security features in Chrome, following the company's addition of agentic artificial intelligence (AI) capabilities to the web browser. To that end, the tech giant said it has implemented layered defenses to make it harder for bad actors to exploit indirect prompt injections that arise as a result of exposure to untrusted web content and inflict harm. Chief
STAC6565 Targets Canada in 80% of Attacks as Gold Blade Deploys QWCrypt Ransomware
Canadian organizations have emerged as the focus of a targeted cyber campaign orchestrated by a threat activity cluster known as STAC6565. Cybersecurity company Sophos said it investigated almost 40 intrusions linked to the threat actor between February 2024 and August 2025. The campaign is assessed with high confidence to share overlaps with a hacking group known as Gold Blade, which is also
Investigators cut off thousands of phone numbers used by suspected fraudsters
More than 3,500 phone numbers used by cybercriminals have been shut down. fongbeerredhot – shutterstock.com
In the fight against investment fraudsters, "grandchild trick" criminals and fake police officers, investigators say they have landed a major blow. 
The infrastructure of the suspected cybercriminals has been significantly weakened, according to a joint statement by the Cybercrime Center Baden-Württemberg, established at the Karlsruhe public prosecutor general's office, the Baden-Württemberg State Criminal Police Office (LKA) and the Federal Financial Supervisory Authority (BaFin). The investigators targeted phone numbers believed to be linked to fraudulent online platforms. By December 5, more than 3,500 mostly German numbers had been identified through which calls with victims were presumably conducted. These landline, mobile and internet numbers have since been shut down by the responsible providers. In addition, a good 350 Austrian numbers were taken off the network in coordination with the Viennese authorities.

"Criminal service providers" in the crosshairs

In online investment fraud, the mostly unknown perpetrators operate internationally and with a division of labor, aiming to lure as many investors as possible into the trap. Phone numbers are frequently rented out to fraud networks and used en masse to commit crimes. The investigating authorities describe this model as "crime as a service". The numbers now blocked are also suspected of having been used for scams such as the "grandchild trick" and "fake police officers". The goal of Operation Herakles is to permanently destroy the technical infrastructure cyber fraudsters use to carry out their crimes and thereby protect consumers in Germany. As early as June and October of this year, more than 2,200 websites had been shut down as part of the same operation – sites designed to lure people into supposed investments on manipulated trading platforms.  
Deutschland soll für Betrüger unwirtschaftlich werden Mit der Nummern-Abschaltung wurden Generalstaatsanwalt Jürgen Gremmelmaier zufolge Tausende potenzielle Betrugsversuche verhindert. So entziehe man den Cyberkriminellen aktiv die Grundlage ihres Handelns. Der Präsident des Landeskriminalamts Baden-Württemberg, Andreas Stenger, betonte die strategische Wirkung der Operation: “Um dagegenzuhalten, müssen die Täter einen immensen organisatorischen Aufwand betreiben, der mit erheblichen Kosten verbunden ist”, teilte er mit. Deutschland solle so für solche Dienste unwirtschaftlich und dadurch unattraktiv werden. (dpa/jm)
Researchers Find Malicious VS Code, Go, npm, and Rust Packages Stealing Developer Data
Cybersecurity researchers have discovered two new extensions on Microsoft Visual Studio Code (VS Code) Marketplace that are designed to infect developer machines with stealer malware. The VS Code extensions masquerade as a premium dark theme and an artificial intelligence (AI)-powered coding assistant, but, in actuality, harbor covert functionality to download additional payloads, take
Ignoring AI in the threat chain could be a costly mistake, experts warn
As AI adoption accelerates across enterprises — and among digital adversaries — a heated debate has erupted over whether AI’s role in the cyber threat chain should be a top concern for CISOs. A vocal handful of experts, along with one cybersecurity vendor, insist that warnings about AI-enhanced threats are exaggerated hype pushed by cyber-intel firms and AI companies eager to sell new defensive tools. “You have all these people worrying about hypothetical scenarios in which AI just magically bypasses all cybersecurity policies and technologies,” Marcus Hutchins, principal threat researcher at Expel, tells CSO. “What you actually have is executives moving away from tried and tested cybersecurity policies, tools, and mitigations, and gravitating toward generative AI products that are unproven and most likely aren’t going to work when it actually comes down to it.” But most frontline practitioners and veteran threat-intel leaders sharply disagree. They argue that AI-assisted threats are not speculative — they’re already here — and that dismissing them puts organizations at risk as increasingly agile adversaries experiment with AI to speed and scale their attacks. “We are absolutely seeing AI used in capabilities that traditional malware doesn’t have,” Steve Stone, SVP of threat discovery and response at SentinelOne, tells CSO. “We see AI being used to refine malware much quicker, used as a sidekick to generate code, or deployed for social engineering. Across the attack lifecycle, attackers are using AI.” Two recent research reports underscore the view that AI is a growing — and potentially more dangerous — part of the cyberattack cycle, and suggest that CISOs might be running out of time to assess how well they can defend against adversaries who currently hold a significant speed advantage. 
Evidence of AI usage in the attack chain is mounting Although many leading cybersecurity and AI companies, including Microsoft and OpenAI, have issued reports detailing how AI can enhance cyberattacks, two recent research reports add weight to this view, suggesting that adversaries are moving beyond AI for simple productivity gains and beginning to integrate it more directly into their operational tooling. On Nov. 5, Google Threat Intelligence Group (GTIG) released a report concluding that threat actors have entered a new operational phase of AI abuse, extending beyond the traditional productivity use of AI to create better phishing emails or write code faster, and are using tools that dynamically alter behavior mid-execution. According to the report, “government-backed threat actors and cyber criminals are integrating and experimenting with AI across the industry throughout the entire attack lifecycle.” Google identified five recent malware samples that were developed using AI, including the first use of “just in time” AI in experimental malware families, such as PROMPTFLUX and PROMPTSTEAL, that use large language models (LLMs) during execution. “Productivity tools are probably, in terms of the overall picture, the biggest slice of the pie that we’re seeing today, in terms of how [threat actors] are using LLMs and other gen AI tools for enabling their own capabilities,” Billy Leonard, GTIG’s global head of analysis of state-sponsored hacking and threats, tells CSO. Leonard sees a day coming soon when threat actors engage in prompt injection, where they manipulate an AI’s model input to leak information or generate harmful content. So far, the AI-assisted attacks his group has witnessed don’t reach these highly sophisticated levels. But, he warns, “we should expect to start seeing threat actors deploying their own AI agents, which gets us closer to that sort of autonomous system [attacks that some fear]. 
There are a number of open-source tools now for doing AI red teaming and other things. Threat actors are likely using those for non-red-teaming purposes. Over the next 12 months, we should start to see more of that.” The Google report initially drew criticism from Hutchins and other researchers for fostering needless fear, although Hutchins, for one, later retracted his complaints, a sign of how uncharted the new AI cyber threat terrain is. “The research report we released was used as both the talking point for the AI [cyber threat] is garbage camp as well as the sky is falling AI viewpoint,” Leonard says. “They both pointed to the same report and the same findings as their justification for their side of the argument. It’s like, alright, you got to pick a side.” Just a week after GTIG issued its report, on Nov. 13, AI company Anthropic issued a bombshell report in which it claimed to have discovered the first orchestrated cyber espionage campaign by a Chinese state-sponsored group, which manipulated the company’s Claude Code tool into trying to infiltrate around 30 global targets, succeeding in a small number of cases. The attack relied on several features of AI models that did not exist, or were in much more nascent form, just a year ago, according to Anthropic, even though much of the attack involved traditional human intervention at various stages during the process. Anthropic said it is sharing this case publicly to help “those in industry, government, and the wider research community strengthen their own cyber defenses.” Critics of AI-enabled threat reports quickly seized on Anthropic’s decision not to release indicators of compromise (IOCs), claiming the omission undercuts the value of the research. But experienced threat leaders say this criticism misunderstands the nature of AI-driven attacks — and the realities of disclosure. “Researchers always want to see all the IOCs,” Morgan Adamski, PwC principal and former executive director of US Cyber Command, tells CSO. 
“But there might be very specific reasons those weren’t included. Detailing how an adversary actually conducted it could essentially give the playbook to our adversaries.” Rob T. Lee, chief AI officer at the SANS Institute, is even more blunt. “Anthropic is not a cybersecurity company like Mandiant or Google, so give them a break. And what indicators of compromise are actually going to help defenders? If they were very clear about how they detected this, that’s on their end. So what are they supposed to do — release IOCs only they can use? It’s ridiculous.” For its part, Anthropic is playing its cards close to the vest. “Releasing IOCs, prompts, or technical specifics can give threat actors a playbook to use more widely,” the company tells CSO. “We weigh this tradeoff case by case, and in this instance, we are sharing directly with industry and government partners rather than publishing broadly.” How CISOs could cut through the confusion The conflicting narratives around AI threats leave many CISOs struggling to reconcile hype with operational reality. Given the emergence of AI-enabled cyber threats amid pushback from some cyber experts who contend these threats are not real, Sophos CEO Joe Levy tells CSO that AI is becoming a “Rorschach test, meaning that however individuals will choose to look at it, that is the pattern that they will find there.” However, Levy cautions that leaders need to take a more balanced view of the situation. “There is indeed novelty in the use of AI and the threat of agentic AI being used in a much more scalable way by attackers than we’ve seen through previous forms of either manual attacks or even automated attacks,” he says. “That element of it is certainly real. 
But I don’t think to this point we’ve seen a significant escalation that inhibits our ability to use our current set of defenses to the same level of effectiveness.” PwC’s Adamski stresses that CISOs should be prepared to turn around new defenses on a dime, given how fast-moving the new AI era will be. “From a defensive perspective, it’s going to have to be seconds,” she says. She also believes it’s important to dispel any notion that AI threats are not real. “The bottom line is that it is an emerging technology and capability that our adversaries can leverage. It exists, and we know that there are people out there testing it, deploying it, and quite honestly being successful in its use,” she says. Clyde Williamson, senior product security architect at Protegrity, agrees that it’s dangerous to assume attackers won’t exploit generative AI and agentic tools. “Anybody who has that hacker mindset when presented with an automation tool like what we have now with generative AI and agentic models, it would be ridiculous to assume that they’re not using that to improve their skills,” he tells CSO. Jimmy Mesta, CTO and co-founder of RAD Security, says CISOs should be preparing their boards now for difficult budget decisions. “Boards will have to be presented with the options of being insecure or being secure, what it’s going to cost, and what it’s going to take,” he tells CSO. “CISOs aren’t going to be able to walk in and say we must do everything to 100%. There will be more trade-offs than ever.” Even as CISOs prepare for the coming wave of AI-assisted attacks, they must maintain focus on cybersecurity fundamentals, Alexandra Rose, global head of government partnerships and director of CTU threat research at Sophos, tells CSO. “We come back to the basics so often because they’re the most effective at stopping what we see — from every level of sophistication, including threat actors experimenting with AI,” she says.
Manufacturing fares better against ransomware — with room for improvement
The manufacturing industry is performing better in protecting itself against ransomware, according to a recent study from security provider Sophos. Compared to previous years’ results, many manufacturing companies are now able to stop ransomware attacks before data is encrypted. This year just 40% of cyberattacks against manufacturing entities resulted in data encryption. This is the lowest figure in five years and a decrease from 74% in 2024, Sophos reports. However, data theft remains a key risk in the sector, with 39% of manufacturers whose data was encrypted by ransomware also suffering data loss — one of the highest rates of all industries surveyed. One consequence, according to the study, is that more than half of the affected companies paid the ransom despite improved defense measures. The median ransom amount was around €861,000, compared to a median demand of approximately €1 million.
Skilled labor shortages and inadequate protection facilitate attacks
More than four in 10 manufacturing companies (43%) cited a lack of expertise as the reason for the cyber incident. Unknown security vulnerabilities were mentioned by 42%, and a lack of protective measures by 41%. Furthermore, the results show that ransomware attacks continue to place a heavy burden on IT and security teams. Almost half of manufacturing companies (47%) reported increased stress within their teams following data encryption. Meanwhile, 44% are experiencing increased pressure from management, and 27% confirmed a change in leadership as a result of the attack — a proportion in line with overall trends for security leaders losing their jobs after a ransomware attack. The study surveyed 332 manufacturing companies worldwide that were affected by ransomware in the past year.
See also: 8 biggest cybersecurity threats manufacturers face | Manufacturers still poorly prepared for cyberattacks as IT/OT converge
December 2025 Patch Tuesday: One Critical Zero-Day, Two Publicly Disclosed Vulnerabilities Among 57 CVEs
GenAI Security as a Checklist
The Open Web Application Security Project (OWASP) gives companies a checklist for (more) GenAI security. Photo: Gannvector | shutterstock.com While companies such as OpenAI, Anthropic, Google, and Microsoft, as well as open-source alternatives, are seeing exponential user growth for their generative AI and large language model offerings, IT security leaders are working hard to keep pace with the rapid development of AI inside their organizations. The non-profit organization OWASP addresses this development with a new publication: the “LLM AI Cybersecurity & Governance Checklist.”
LLM threat categories
AI is a broad topic, so the OWASP checklist primarily aims to help executives quickly identify the key risks associated with generative AI and large language models and initiate appropriate countermeasures. The goal is to ensure that organizations have the fundamental security controls needed to deploy generative AI and LLM tools, services, and products safely. OWASP stresses that the checklist makes no claim to completeness and will continue to evolve as the technology and tooling mature. The security experts sort LLM threats into several categories, as the following illustration shows: The OWASP AI threat map. Photo: OWASP
When defining an LLM strategy, organizations must above all manage the unique risks that generative AI and LLMs raise. These must be minimized through organizational governance and corresponding security controls. In its publication, OWASP recommends a six-step approach to developing an effective LLM strategy: Six steps to LLM deployment with OWASP. Photo: OWASP
OWASP also recommends looking closely at the available LLM deployment types and weighing them carefully: Which type of AI model is right for you? Photo: OWASP
The OWASP AI checklist
Below, we have broken the OWASP checklist down into its parts. You should review the following areas as part of your generative AI and LLM initiatives.
Adversarial risk
This area covers competitors as well as attackers, and focuses not only on the threat landscape but also on the business landscape. It includes, for example, understanding how competitors use AI to achieve better business outcomes, and updating internal processes and policies (such as incident response plans) to be prepared for cyberattacks and security incidents involving generative AI.
Threat modeling
Threat modeling is gaining importance through the “secure by design” approach promoted by numerous security institutions. This area covers questions such as how attackers can use LLMs and generative AI to build exploits faster, how organizations can detect malicious AI use, and how the technology can be secured across internal systems and environments.
AI asset inventory
“You can’t protect what you don’t know exists” applies in the generative AI world, too. AI asset inventory means cataloging assets for internally developed solutions as well as external tools and platforms. It is important to know not only which tools and services are in use, but also who is responsible for them. OWASP also recommends recording AI components in SBOMs and cataloging data sources by sensitivity. Beyond that, there should be a process ensuring that future tools and services can be securely onboarded to, and offboarded from, the corporate inventory.
AI security and privacy training
People are often called the weakest link in the security chain. That need not be the case, provided organizations build AI security and privacy training into their GenAI journey. This includes giving the workforce an understanding of current AI and LLM initiatives, of the technology itself, and of the key security issues. Equally essential here is a culture of trust and transparency, which is also crucial to preventing “shadow AI”; otherwise, platforms get used covertly and security is undermined.
Establishing business cases for AI
Much as with the cloud before, most companies do not build coherent, strategic business cases for adopting new technologies, and generative AI and LLMs are no exception. It is easy to get swept up in hype and FOMO, but without a solid business case companies risk far more than poor results.
Governance
Without governance, accountability and clear objectives are nearly impossible to achieve. This part of the OWASP checklist includes, for example, creating a RACI chart that documents a company’s AI initiatives, assigns responsibilities, and establishes company-wide policies and processes.
Legal
The legal implications of AI should not be underestimated; they are evolving rapidly and can inflict considerable damage on a company’s reputation and finances. This area can cover a range of aspects, for example product warranties involving AI, AI EULAs, or intellectual property risks. In short: bring in your legal team or the appropriate experts to identify the legal activities relevant to your organization.
Regulatory
Building on the legal discussion, regulatory requirements are also evolving quickly; the EU AI Act is one example. Organizations should therefore determine which AI compliance requirements apply to them.
Using or implementing LLM solutions
Deploying LLM solutions requires specific risk and control considerations. In this area, the OWASP checklist names, among other things: implementing access control, securing AI training pipelines, mapping data workflows, and identifying existing or potential vulnerabilities in LLMs and their supply chains. Continuous third-party audits, penetration tests, and code reviews for suppliers are also recommended.
Testing, evaluation, verification, and validation (TEVV)
The TEVV process is explicitly recommended by NIST in its AI framework. It encompasses continuous testing, evaluation, verification, and validation, along with metrics on the functionality, security, and reliability of AI models, across the entire model lifecycle.
Model and risk cards
For the ethical use of large language models, the OWASP checklist calls for model and risk “cards.” These help users understand AI systems and thereby strengthen trust in them, and they make it possible to openly address potentially negative side effects such as bias or privacy issues. The cards can include details on AI models, architecture, training methods, and performance metrics. A further focus is on responsible AI and all questions relating to fairness and transparency.
Retrieval-augmented generation
Retrieval-augmented generation (RAG) is a way to improve LLMs’ ability to retrieve relevant data from specific sources. This includes fine-tuning pre-trained models, or re-training existing ones on new datasets, to improve their performance. OWASP recommends implementing RAG to maximize the value and effectiveness of large language models in enterprise use.
AI red teaming
Last but not least, the OWASP experts recommend holding AI red-teaming sessions, in which attacks on AI systems are simulated to identify vulnerabilities and validate existing controls and defenses. OWASP stresses that red teaming on its own is not a comprehensive way to secure generative AI and LLMs; rather, AI red teaming should be embedded in a broader approach. According to the experts, it is essential that the organization is clear about what its red-teaming requirements should look like; otherwise, policy violations or even legal trouble are inevitable. (fm)
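The retrieve-then-prompt pattern behind RAG can be reduced to a toy sketch. The following Python fragment is purely illustrative and not from the checklist: naive keyword overlap stands in for embedding similarity, and the prompt is only assembled, never sent to a model.

```python
def retrieve(query: str, docs: list[str]) -> str:
    # Score each document by word overlap with the query; a production RAG
    # system would use vector embeddings and a similarity index instead.
    query_words = set(query.lower().split())
    return max(docs, key=lambda d: len(query_words & set(d.lower().split())))

def build_prompt(query: str, docs: list[str]) -> str:
    # Prepend the best-matching document as grounding context for the LLM call.
    return f"Context: {retrieve(query, docs)}\n\nQuestion: {query}"
```

A real deployment would add the security controls the checklist discusses: access control on the document store and sensitivity-aware filtering of what may be retrieved.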
Apache Tika hit by critical vulnerability thought to be patched months ago
A security flaw in the widely used Apache Tika XML document extraction utility, originally made public last summer, is wider in scope and more serious than first thought, the project’s maintainers have warned. Their new alert relates to two entwined flaws: the first, CVE-2025-54988, from August, rated 8.4 in severity, and the second, CVE-2025-66516, made public last week, rated 10.0. CVE-2025-54988 is a weakness in the tika-parser-pdf-module used to process PDFs in Apache Tika from version 1.13 up to and including version 3.2.1. It is one module in Tika’s wider ecosystem, which is used to normalize data from 1,000 proprietary formats so that software tools can index and read them. Unfortunately, that same document-processing capability makes the software a prime target for campaigns using XML External Entity (XXE) injection attacks, a recurring issue in this class of utility. In the case of CVE-2025-54988, an attacker could mount an XXE injection attack by hiding XML Forms Architecture (XFA) instructions inside a malicious PDF. Through this, “an attacker may be able to read sensitive data or trigger malicious requests to internal resources or third-party servers,” said the CVE. Attackers could exploit the flaw to retrieve data from the tool’s document-processing pipeline, exfiltrating it via Tika’s handling of the malicious PDF.
CVE superset
The maintainers have now realized that the XXE injection flaw is not limited to this module: it also affects Apache Tika tika-core versions 1.13 to 3.2.1 and tika-parsers versions 1.13 to 1.28.5. Unusually, and confusingly, this means there are now two CVEs for the same issue, with the second, CVE-2025-66516, a superset of the first.
Presumably, the reasoning behind issuing a second CVE is that it draws attention to the fact that people who patched CVE-2025-54988 are still at risk because of the additional vulnerable components listed in CVE-2025-66516. So far, there’s no evidence that the XXE injection weakness in these CVEs is being exploited in the wild. However, the risk is that this will quickly change should the vulnerability be reverse engineered or proofs-of-concept appear. CVE-2025-66516 carries an unusual maximum severity rating of 10.0, which makes patching it a priority for anyone using this software in their environment. Users should update to tika-core version 3.2.2, tika-parser-pdf-module version 3.2.2 (standalone PDF module), or tika-parsers version 2.0.0 if on the legacy line. However, patching will only help developers looking after applications known to be using Apache Tika. The danger is that its use might not be listed in all application configuration files, creating a blind spot in which it goes undetected. The only mitigation against this uncertainty is for developers to turn off the XML parsing capability in their applications via the tika-config.xml configuration file.
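The primitive behind every XXE attack is an external (SYSTEM or PUBLIC) entity declaration in the document’s DTD. As defense in depth while patches roll out, some teams pre-screen XML before it reaches a parser. The Python sketch below is a hypothetical heuristic of that kind, not part of Tika’s API, and string matching is no substitute for a hardened parser configuration:

```python
import re

# Heuristic pre-filter: flag XML that declares an external entity, the
# primitive behind XXE injection. Illustrative only -- disabling entity
# resolution in the parser itself remains the real mitigation.
_EXTERNAL_ENTITY = re.compile(rb"<!ENTITY\s+\S+\s+(?:SYSTEM|PUBLIC)\b")

def contains_external_entity(xml_bytes: bytes) -> bool:
    return _EXTERNAL_ENTITY.search(xml_bytes) is not None

malicious = (b'<?xml version="1.0"?>'
             b'<!DOCTYPE r [<!ENTITY x SYSTEM "file:///etc/passwd">]>'
             b'<r>&x;</r>')
benign = b'<r>no entities here</r>'
```

Here `contains_external_entity(malicious)` is true and `contains_external_entity(benign)` is false; a scanner like this would quarantine the first document before any XML parser ever sees it.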
Experts Confirm JS#SMUGGLER Uses Compromised Sites to Deploy NetSupport RAT
Cybersecurity researchers are calling attention to a new campaign dubbed JS#SMUGGLER that has been observed leveraging compromised websites as a distribution vector for a remote access trojan named NetSupport RAT. The attack chain, analyzed by Securonix, involves three main moving parts: An obfuscated JavaScript loader injected into a website, an HTML Application (HTA) that runs encrypted
⚡ Weekly Recap: USB Malware, React2Shell, WhatsApp Worms, AI IDE Bugs & More
It’s been a week of chaos in code and calm in headlines. A bug that broke the internet’s favorite framework, hackers chasing AI tools, fake apps stealing cash, and record-breaking cyberattacks — all within days. If you blink, you’ll miss how fast the threat map is changing. New flaws are being found, published, and exploited in hours instead of weeks. AI-powered tools meant to help developers
How Can Retailers Cyber-Prepare for the Most Vulnerable Time of the Year?
The holiday season compresses risk into a short, high-stakes window. Systems run hot, teams run lean, and attackers time automated campaigns to get maximum return. Multiple industry threat reports show that bot-driven fraud, credential stuffing and account takeover attempts intensify around peak shopping events, especially the weeks around Black Friday and Christmas.  Why holiday peaks
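Credential-stuffing and account-takeover pressure of the kind described above is commonly blunted with per-account throttling of failed logins. A minimal sliding-window sketch in Python; the thresholds and class name are illustrative, not taken from any of the cited reports:

```python
from collections import deque

class FailedLoginThrottle:
    """Block an account once it accrues too many failed logins in a time window."""

    def __init__(self, max_failures: int = 5, window_seconds: float = 60.0):
        self.max_failures = max_failures
        self.window = window_seconds
        self._failures: dict[str, deque] = {}

    def record_failure(self, account: str, now: float) -> None:
        self._failures.setdefault(account, deque()).append(now)

    def is_blocked(self, account: str, now: float) -> bool:
        q = self._failures.get(account)
        if not q:
            return False
        while q and now - q[0] > self.window:  # drop events outside the window
            q.popleft()
        return len(q) >= self.max_failures
```

Passing timestamps in explicitly keeps the logic testable; a production system would also rate-limit by IP and device fingerprint, since stuffing campaigns rotate both accounts and sources.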
Android Malware FvncBot, SeedSnatcher, and ClayRat Gain Stronger Data Theft Features
Cybersecurity researchers have disclosed details of two new Android malware families dubbed FvncBot and SeedSnatcher, as another upgraded version of ClayRat has been spotted in the wild. The findings come from Intel 471, CYFIRMA, and Zimperium, respectively. FvncBot, which masquerades as a security app developed by mBank, targets mobile banking users in Poland. What's notable about the malware
Sneeit WordPress RCE Exploited in the Wild While ICTBroadcast Bug Fuels Frost Botnet Attacks
A critical security flaw in the Sneeit Framework plugin for WordPress is being actively exploited in the wild, per data from Wordfence. The remote code execution vulnerability in question is CVE-2025-6389 (CVSS score: 9.8), which affects all versions of the plugin prior to and including 8.3. It has been patched in version 8.4, released on August 5, 2025. The plugin has more than 1,700 active
MuddyWater Deploys UDPGangster Backdoor in Targeted Turkey-Israel-Azerbaijan Campaign
The Iranian hacking group known as MuddyWater has been observed leveraging a new backdoor dubbed UDPGangster that uses the User Datagram Protocol (UDP) for command-and-control (C2) purposes. The cyber espionage activity targeted users in Turkey, Israel, and Azerbaijan, according to a report from Fortinet FortiGuard Labs. "This malware enables remote control of compromised systems by allowing
Researchers Uncover 30+ Flaws in AI Coding Tools Enabling Data Theft and RCE Attacks
Over 30 security vulnerabilities have been disclosed in various artificial intelligence (AI)-powered Integrated Development Environments (IDEs) that combine prompt injection primitives with legitimate features to achieve data exfiltration and remote code execution. The security shortcomings have been collectively named IDEsaster by security researcher Ari Marzouk (MaccariTA). They affect popular
Critical React2Shell Flaw Added to CISA KEV After Confirmed Active Exploitation
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Friday formally added a critical security flaw impacting React Server Components (RSC) to its Known Exploited Vulnerabilities (KEV) catalog following reports of active exploitation in the wild. The vulnerability, CVE-2025-55182 (CVSS score: 10.0), relates to a case of remote code execution that could be triggered by an
New Prompt Injection Attack Vectors Through MCP Sampling
Model Context Protocol (MCP) connects LLM apps to external data sources and tools. Unit 42 examines its security implications through various attack vectors.
Zero-Click Agentic Browser Attack Can Delete Entire Google Drive Using Crafted Emails
A new agentic browser attack targeting Perplexity's Comet browser is capable of turning a seemingly innocuous email into a destructive action that wipes a user's entire Google Drive contents, findings from Straiker STAR Labs show. The zero-click Google Drive Wiper technique hinges on connecting the browser to services like Gmail and Google Drive to automate routine tasks by granting them
Critical XXE Bug CVE-2025-66516 (CVSS 10.0) Hits Apache Tika, Requires Urgent Patch
A critical security flaw has been disclosed in Apache Tika that could result in an XML external entity (XXE) injection attack. The vulnerability, tracked as CVE-2025-66516, is rated 10.0 on the CVSS scoring scale, indicating maximum severity. "Critical XXE in Apache Tika tika-core (1.13-3.2.1), tika-pdf-module (2.0.0-3.2.1) and tika-parsers (1.13-1.28.5) modules on all platforms allows an
Chinese Hackers Have Started Exploiting the Newly Disclosed React2Shell Vulnerability
Two hacking groups with ties to China have been observed weaponizing the newly disclosed security flaw in React Server Components (RSC) within hours of it becoming public knowledge. The vulnerability in question is CVE-2025-55182 (CVSS score: 10.0), aka React2Shell, which allows unauthenticated remote code execution. It has been addressed in React versions 19.0.1, 19.1.2, and 19.2.1. According
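For teams triaging exposure, the fixed versions named above (19.0.1, 19.1.2, and 19.2.1) translate into a simple check against an installed React version. A quick illustrative Python sketch, under the assumption that only these three 19.x release lines received the fix:

```python
# Patch floor per affected React 19.x release line, from the fixed
# versions cited in the advisory coverage above.
FIXED_PATCH = {(19, 0): 1, (19, 1): 2, (19, 2): 1}

def needs_upgrade(version: str) -> bool:
    """True if the given version sits on a 19.x line below its patched release."""
    major, minor, patch = (int(part) for part in version.split("."))
    floor = FIXED_PATCH.get((major, minor))
    return floor is not None and patch < floor
```

For example, `needs_upgrade("19.2.0")` is true while `needs_upgrade("19.2.1")` is false; versions outside the three listed lines are left untouched rather than guessed at.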
Last updated: 2025-12-09 04:25:17