Anthropic’s Mythos AI has triggered global cybersecurity alarms, raising fears over access to and safety of powerful language models. Founded in 2021, Anthropic is a key player in AI safety research.
As of early March 2026, President Trump hosted executives from major tech companies including Google, Microsoft, Meta, and OpenAI to sign a voluntary 'ratepayer protection pledge.' The pledge commits these firms to build or buy their own power generation for AI data centers to prevent electricity price hikes for consumers amid surging energy demand. Experts remain skeptical about the pledge's enforceability and its impact on rising utility costs.
As of March 13, 2026, the Strait of Hormuz remains effectively closed due to ongoing conflict between the US, Israel, and Iran. Iranian missile and drone attacks, alongside US and Israeli strikes, have halted tanker traffic through this vital waterway, which carries about 20% of global oil. The closure has caused surging oil prices, soaring insurance costs, and widespread shipping disruptions, with major powers considering naval escorts to reopen the route.
UK authors and artists protest proposed copyright changes allowing AI firms to use protected works without permission. Campaigns include publishing an 'empty' book and calling for licensing reforms, amid government consultations and industry outrage over potential impacts on creative livelihoods.
Leaders like BlackRock's Larry Fink warn that AI's growth could deepen economic inequality, benefiting a few large companies and investors. Concerns about a potential bubble and market risks are rising as AI investments surge, with new startups like Yann LeCun's AMI Labs aiming to develop more advanced AI systems.
As of March 13, 2026, Meta has delayed the launch of its new AI model, Avocado, to May after internal tests showed it underperformed compared to Google's latest Gemini 3.0. Meanwhile, Meta acquired Moltbook, a social platform for AI agents, integrating its founders into Meta's AI research division to advance AI agent technology.
The Pentagon designated Anthropic a supply chain risk, leading to legal battles and industry concern. Microsoft and other tech giants oppose the move, citing potential harm to AI development and national security. The dispute centers on AI's use in military and surveillance applications.
The Trump administration has revoked its partnership with Anthropic, citing concerns over AI safety and ideological differences. Anthropic is suing the government, which has labeled it a supply chain risk. The dispute highlights tensions over AI's military use and regulatory control.
Illinois's 2026 primaries feature heavy spending from AI and crypto industries, influencing key races including the Senate and House. Candidates' positions on regulation and campaign finance are central, with outside groups spending nearly $20 million. The results will shape the state's political landscape and signal industry influence.
The US and Israel have intensified their military campaign against Iran, with AI tools aiding data processing and targeting decisions. Human oversight remains in place, but concerns are growing over AI's role in potential war crimes and strategic decision-making amid the ongoing conflict and political debates.
In March 2026, AI and cryptocurrency industries spent nearly $20 million in Illinois primaries to influence candidates' stances on regulation. Lt. Gov. Juliana Stratton, backed by Gov. JB Pritzker, won the Senate primary despite opposition from crypto-backed super PACs. Rival AI super PACs Leading the Future and Public First spent millions supporting opposing candidates nationwide, signaling growing tech industry political ambitions ahead of the 2026 midterms.
OpenAI promotes its AI safety policies and future vision, but internal reports and interviews reveal concerns about leadership trustworthiness, safety environment, and industry competition. The story highlights tensions between public optimism and internal skepticism, with implications for AI regulation and societal impact.
California's governor signed an executive order requiring AI companies to implement safety, privacy, and bias mitigation measures for state contracts. The move challenges federal efforts to limit regulation, emphasizing public safety and transparency in AI development.
Anthropic has released the Mythos model to a limited group of firms under Project Glasswing and has warned it can find thousands of software vulnerabilities faster than humans. Regulators and finance leaders in the US, UK, EU and Canada have convened urgent meetings, wargames and briefings to assess risks and coordinate defensive access and rules.
Snap has announced it is cutting 1,000 jobs, representing 16% of its workforce, citing rapid AI development. The company aims to reduce costs by over $500 million and improve profitability, with layoffs affecting mainly North American staff. The move follows similar layoffs across the tech sector driven by AI integration.
Recent reporting has shown the Iran war has significantly drained US missile and interceptor stockpiles, forcing the Pentagon to reallocate munitions from other regions and ask Congress for emergency funding. At the same time, militaries are increasing investment in low-cost drones, counter-drone systems and battlefield robots — including Ukrainian systems and US-funded autonomous drone programs.
A man has been charged with attempting to kill OpenAI CEO Sam Altman by throwing a Molotov cocktail at his home and attempting to set OpenAI's headquarters on fire. The suspect, from Texas, faces federal and state charges amid rising tensions over AI safety and activism. The attack follows increased threats and protests against AI leaders.
The White House has issued a memo saying foreign actors, principally based in China, have been running industrial-scale campaigns to "distil" US frontier AI systems by using proxy accounts and jailbreaking techniques to extract capabilities. The administration has said it will share intelligence with US AI firms and explore measures to punish offenders ahead of a planned US–China summit.
The Pentagon has requested a dramatic funding surge for autonomous drone warfare and AI-enabled systems in the 2027 budget, with the Defense Autonomous Warfare Group (DAWG) receiving tens of billions to expand drone production, counter-drone capabilities, and autonomous testing with industry partners. The move follows a broader push to recalibrate defense priorities amid global drone competition and domestic manufacturing concerns.