
Executive Summary

Rapid advances in AI are poised to reshape nearly every aspect of society. Governments see in these dual-use AI systems a means to military dominance, stoking a bitter race to maximize AI capabilities. Voluntary industry pauses or attempts to exclude government involvement cannot change this reality. These systems that can streamline research and bolster economic output can also be turned to destructive ends, enabling rogue actors to engineer bioweapons and hack critical infrastructure. “Superintelligent” AI surpassing humans in nearly every domain would amount to the most precarious technological development since the nuclear bomb. Given the stakes, superintelligence is inescapably a matter of national security, and an effective superintelligence strategy should draw from a long history of national security policy.

Deterrence

A race for AI-enabled dominance endangers all states. If, in a hurried bid for superiority, one state inadvertently loses control of its AI, it jeopardizes the security of all states. Alternatively, if the same state succeeds in producing and controlling a highly capable AI, it likewise poses a direct threat to the survival of its peers. In either event, states seeking to secure their own survival may preventively sabotage competing AI projects. A state could try to disrupt such an AI project with interventions ranging from covert operations that degrade training runs to physical damage that disables AI infrastructure. Thus, we are already approaching a dynamic similar to nuclear Mutual Assured Destruction (MAD), in which no power dares attempt an outright grab for strategic monopoly, as any such effort would invite a debilitating response. This strategic condition, which we refer to as Mutual Assured AI Malfunction (MAIM), represents a potentially stable deterrence regime, but maintaining it could require care. We outline measures to maintain the conditions for MAIM, including clearly communicated escalation ladders, placement of AI infrastructure far from population centers, transparency into datacenters, and more.

Nonproliferation

While deterrence through MAIM constrains the intent of superpowers, all nations have an interest in limiting the AI capabilities of terrorists. Drawing on nonproliferation precedents for weapons of mass destruction (WMDs), we outline three levers for achieving this. Mirroring measures to restrict key inputs to WMDs such as fissile material and chemical weapons precursors, compute security involves knowing reliably where high-end AI chips are and stemming smuggling to rogue actors. Monitoring shipments, tracking chip inventories, and employing security features like geolocation can help states account for them. States must prioritize information security to protect the model weights underlying the most advanced AI systems from falling into the hands of rogue actors, similar to controls on other sensitive information. Finally, akin to screening protocols for DNA synthesis services to detect and refuse orders for known pathogens, AI companies can be incentivized to implement technical AI security measures that detect and prevent malicious use.
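
To make the compute-security lever concrete, here is a minimal sketch, assuming a hypothetical registry of exported accelerators and a per-chip keyed attestation scheme. The names (ChipAttestation, EXPORT_REGISTRY), message format, and key handling are illustrative assumptions, not a real or proposed protocol.

```python
# Hypothetical sketch of chip-level location attestation. The registry,
# message format, and key scheme below are illustrative assumptions only;
# no real or proposed tracking protocol is implied.
import hashlib
import hmac
from dataclasses import dataclass


@dataclass
class ChipAttestation:
    chip_id: str   # serial number registered at export time
    region: str    # coarse location reported by the chip's firmware
    counter: int   # monotonic counter to prevent replaying old reports
    mac: str       # HMAC over the fields above, keyed per chip


# Export registry: chip_id -> (licensed region, per-chip shared key)
EXPORT_REGISTRY = {
    "GPU-0001": ("EU", b"per-chip-secret-key"),
}


def verify(att: ChipAttestation, last_counter: int) -> bool:
    """Check that a report is authentic, fresh, and in the licensed region."""
    entry = EXPORT_REGISTRY.get(att.chip_id)
    if entry is None:
        return False  # unknown chip: a possible smuggling signal
    licensed_region, key = entry
    msg = f"{att.chip_id}|{att.region}|{att.counter}".encode()
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, att.mac):
        return False  # forged or tampered report
    if att.counter <= last_counter:
        return False  # replayed report
    return att.region == licensed_region  # out-of-region chips get flagged
```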

Competitiveness

Beyond securing their survival, states will have an interest in harnessing AI to bolster their competitiveness, as successful AI adoption will be a determining factor in national strength. Adopting AI-enabled weapons and carefully integrating AI into command and control is increasingly essential for military strength. Recognizing that economic security is crucial for national security, domestic capacity for manufacturing high-end AI chips will ensure a resilient supply and sidestep geopolitical risks in Taiwan. Robust legal frameworks governing AI agents can set basic constraints on their behavior that follow the spirit of existing law. Finally, governments can maintain political stability through measures that improve the quality of decision-making and combat the disruptive effects of rapid automation.

By detecting and deterring destabilizing AI projects through intelligence operations and targeted disruption, restricting access to AI chips and capabilities for malicious actors through strict controls, and guaranteeing a stable AI supply chain by investing in domestic chip manufacturing, states can safeguard their security while opening the door to unprecedented prosperity.

 

  • Reddit has begun issuing warnings to users who regularly upvote violent content, with a view to taking harsher action in the future.
  • The company says it will consider expanding this action to other forms of content in the future.
  • Users are concerned that this moderation tactic could be abused or poorly implemented.
 

The American military has signed a deal with Scale AI that gives artificial intelligence what is, as far as we can tell, its most prominent role in the Western defense sector to date, with AI agents now to be used in planning and operations.

 

I guess AI is really gonna replace a lot of people's jobs by the end of 2025...

Fuck.

 

A single DMCA anti-circumvention notice, sent by Nintendo on the one-year anniversary of its 2024 lawsuit against Yuzu, showed just how much things can change in a year. The notice targeted nine repos linked to the Switch emulator Ryujinx, and the resulting domino effect led to the removal of 4,238 repos. Elsewhere, the distilled components of Yuzu's demise can be found in recent takedown notices.

 

A group of Apple Watch buyers has filed a lawsuit in Silicon Valley accusing the tech giant of overstating how environmentally friendly production of the smart wristwear is.

 

Executive Summary

HUMAN’s Satori Threat Intelligence and research team has uncovered and—in collaboration with Google, Trend Micro, Shadowserver, and other partners—partially disrupted a sprawling and complex cyberattack dubbed BADBOX 2.0. BADBOX 2.0 is a major adaptation and expansion of the Satori team’s 2023 BADBOX disclosure, and is the largest botnet made up of infected connected TV (CTV) devices ever uncovered. (BADBOX had a portion of its infrastructure taken down by the German government in December 2024.) The BADBOX 2.0 investigation reflects how the threat actors have shifted their targets and tactics following the BADBOX disruption in 2023.

This attack centered primarily on low-cost, ‘off-brand’ and uncertified Android Open Source Project devices shipped with a backdoor. These backdoored devices gave the threat actors access to launch fraud schemes of several kinds, including the following (a detection sketch follows the list):

  • Residential proxy services: selling access to the device’s IP address without the user’s permission
  • Ad fraud – hidden ad units: using built-in content apps to render hidden ads
  • Ad fraud – hidden WebViews: launching hidden browser windows that navigate to a collection of game sites owned by the threat actors
  • Click fraud: navigating an infected device to a low-quality domain and clicking on an ad present on the page
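
As a minimal sketch of how a defender might triage impressions for the hidden-ad, hidden-WebView, and click-fraud schemes above: the telemetry fields (view_width, window_visible, time_on_page_ms) and thresholds are assumptions for illustration; production ad-fraud detection, including HUMAN's, relies on far richer device and traffic signals.

```python
# Hypothetical triage heuristic for the fraud schemes listed above. The
# telemetry fields and thresholds are assumptions for illustration; real
# ad-fraud detection relies on far richer device and traffic signals.

def looks_like_hidden_ad(impression: dict) -> bool:
    """Flag an ad impression that no human could plausibly have seen."""
    width = impression.get("view_width", 0)
    height = impression.get("view_height", 0)
    # Hidden ad units and hidden WebViews render off-screen or at zero size.
    if width <= 1 or height <= 1:
        return True
    if not impression.get("window_visible", True):
        return True
    # Click fraud: a click with implausibly little time on the page.
    if impression.get("click") and impression.get("time_on_page_ms", 0) < 200:
        return True
    return False


# Example: a zero-size render in an invisible window is flagged.
assert looks_like_hidden_ad(
    {"view_width": 0, "view_height": 0, "window_visible": False}
)
```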

While HUMAN and its partners currently observe the threat actors pushing payloads to the device to implement these fraud schemes, the attackers are not limited to just these four types of fraud. These threat actors have the technical capability to push any functionality they want to the device by loading and executing an APK file of their choosing, or by requesting that the device execute code. For example, researchers at Trend Micro who collaborated on this investigation with HUMAN observed one of the threat actor groups (Lemon Group) deploying payloads to programmatically create accounts in online services, collect sensitive data from devices, and more.
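
As a rough illustration of how that dynamic-loading capability might surface in device telemetry, a hypothetical triage rule; the paths and event fields below are assumptions, not part of the reported investigation.

```python
# Hypothetical triage rule for the dynamic-loading capability described
# above: flag executable code (APK/DEX) loaded from outside normal install
# locations. The paths and event fields are illustrative assumptions.

STORE_PATHS = ("/data/app/",)  # where store-installed code normally lives


def flag_dynamic_load(event: dict) -> bool:
    """event: {'process': str, 'loaded_path': str} from device telemetry."""
    path = event.get("loaded_path", "")
    if not path.endswith((".apk", ".dex")):
        return False
    # Code loaded from anywhere outside the store path is worth a look.
    return not path.startswith(STORE_PATHS)
```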

The backdoor underpinning the BADBOX 2.0 operation is distributed in three ways:

  • pre-installed on the device, in a similar fashion to the primary BADBOX backdoor
  • retrieved from a command-and-control (C2) server contacted by the device on first boot
  • downloaded from third-party marketplaces by unsuspecting users

[Diagram: the three backdoor delivery mechanisms for BADBOX 2.0]

Satori researchers identified four threat actor groups involved in BADBOX 2.0:

  • SalesTracker Group—so named by HUMAN for a module used by the group to monitor infected devices—is the group researchers believe is responsible for the BADBOX operation, and that staged and managed the C2 infrastructure for BADBOX 2.0.
  • MoYu Group—so named by HUMAN based on the name of residential proxy services offered by the threat actors based on BADBOX 2.0-infected devices—developed the backdoor for BADBOX 2.0, coordinated the variants of that backdoor and the devices on which they would be installed, operated a botnet composed of a subset of BADBOX 2.0-infected devices, operated a click fraud campaign, and staged the capabilities to run a programmatic ad fraud campaign.
  • Lemon Group, a threat actor group first reported by Trend Micro, is connected to the residential proxy services created through the BADBOX operation, and is connected to an ad fraud campaign across a network of HTML5 (H5) game websites using BADBOX 2.0-infected devices.
  • LongTV is a brand run by a Malaysian internet and media company, which operates connected TV (CTV) devices and develops apps for those devices and for other Android Open Source Project devices. Several LongTV-developed apps are responsible for an ad fraud campaign centered on hidden ads based on an “evil twin” technique, as described by Satori researchers in the 2024 Konfety disclosure. (This technique centers on malicious apps, distributed through unofficial channels, that pose as benign apps distributed through official channels and share the same package name; a minimal check is sketched below.)
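
A minimal sketch of an evil-twin check, under the assumption that a defender knows the official signing-certificate digest for each tracked package name; the table contents and digest are placeholders.

```python
# Hypothetical "evil twin" check: a sideloaded app reusing the package name
# of an official app but signed with a different certificate. The digest
# below is a placeholder, not a real value.
import hashlib

OFFICIAL_CERT_SHA256 = {
    "com.example.media": "placeholder-digest-from-official-store-listing",
}


def is_evil_twin(package_name: str, cert_der: bytes) -> bool:
    """True if the package name is tracked but the signer does not match."""
    expected = OFFICIAL_CERT_SHA256.get(package_name)
    if expected is None:
        return False  # package not tracked; no verdict
    observed = hashlib.sha256(cert_der).hexdigest()
    return observed != expected
```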

These groups were connected to one another through shared infrastructure (common C2 servers) and historical and current business ties.

Satori researchers discovered BADBOX 2.0 while monitoring the remaining BADBOX infrastructure for adaptation following its disruption; as a matter of course, Satori researchers keep an eye on threats long after they’re first disrupted. In the case of BADBOX 2.0, researchers had been watching the threat actors for more than a year between the first BADBOX disclosure and BADBOX 2.0.

Researchers found new C2 servers that hosted a list of APKs targeting Android Open Source Project devices similar to those impacted by BADBOX. Pulling on those threads led the researchers to the various threats on each device. Through collaboration with Google, Trend Micro, Shadowserver, and other HUMAN partners, BADBOX 2.0 has been partially disrupted.

 

Members of the Alliance for Creativity and Entertainment (ACE) filed two separate copyright infringement lawsuits yesterday, targeting the alleged operators of IPTV services including 'Outer Limits IPTV', 'Shrugs' and 'Zing'. Amazon, Netflix and several major Hollywood studios demand an end to the infringing activity and an award for damages, which could run to millions of dollars.
