-
As of March 13, 2026, the Strait of Hormuz remains effectively closed due to ongoing conflict between the US, Israel, and Iran. Iranian missile and drone attacks, alongside US and Israeli strikes, have halted tanker traffic through this vital waterway, which carries about 20% of global oil. The closure has caused surging oil prices, soaring insurance costs, and widespread shipping disruptions, with major powers considering naval escorts to reopen the route.
-
Leaders like BlackRock's Larry Fink warn that AI's growth could deepen economic inequality, benefiting a few large companies and investors. Concerns about a potential bubble and market risks are rising as AI investments surge, with new startups like LeCun's AMI Labs aiming to develop more advanced AI systems.
-
The UK government announced a £1bn investment in quantum computing to retain talent and compete with US AI dominance. Despite ambitious plans, many UK AI projects face delays and questionable investments, raising concerns over the true scale of infrastructure buildout and economic impact.
-
As of March 13, 2026, Meta has delayed the launch of its new AI model, Avocado, to May after internal tests showed it underperformed compared to Google's latest Gemini 3.0. Meanwhile, Meta acquired Moltbook, a social platform for AI agents, integrating its founders into Meta's AI research division to advance AI agent technology.
-
Recent articles highlight growing concerns over AI replacing creative jobs, sudden layoffs, and inflexible workplace policies. From AI's creative limits to abrupt dismissals, the stories reveal a shifting landscape for workers and industries as of March 13, 2026.
-
The Portsmouth Gaseous Diffusion Plant in Ohio is being transformed into the PORTS Technology Campus, featuring a 10-gigawatt data center and up to 10 gigawatts of new power generation, including natural gas. The project aims to support AI, fusion energy, and national security research, creating thousands of jobs.
-
OpenAI promotes its AI safety policies and future vision, but internal reports and interviews reveal concerns about leadership trustworthiness, safety environment, and industry competition. The story highlights tensions between public optimism and internal skepticism, with implications for AI regulation and societal impact.
-
Anthropic has released its Mythos AI model to select firms, warning it can identify thousands of software vulnerabilities faster than humans. Governments and financial regulators in the US, UK, and Canada have convened urgent meetings to assess risks and coordinate defenses. The model’s power has sparked debate over cybersecurity threats and the need for controlled access.
-
On April 10, 2026, a 20-year-old suspect threw a Molotov cocktail at Sam Altman’s San Francisco residence, setting an exterior gate on fire. The suspect then threatened to burn down OpenAI’s headquarters before being arrested. No injuries were reported. Authorities and OpenAI are investigating the motive and working to ensure employee safety.
-
Daniel Moreno-Gama, 20, has been charged with attempted murder and arson after throwing a Molotov cocktail at OpenAI CEO Sam Altman's San Francisco home and threatening to burn down OpenAI's headquarters. Moreno-Gama traveled from Texas, carried an anti-AI manifesto with threats against Altman, and faces federal and state charges that could lead to life imprisonment. No injuries have been reported.
-
Snap has announced it is cutting 1,000 jobs, representing 16% of its workforce, citing rapid AI development. The company aims to reduce costs by over $500 million and improve profitability, with layoffs affecting mainly North American staff. The move follows similar layoffs across the tech sector driven by AI integration.
-
A man has been charged with attempting to kill OpenAI CEO Sam Altman by throwing a Molotov cocktail at his home and trying to set the headquarters on fire. The suspect, from Texas, is facing federal and state charges amid rising tensions over AI safety and activism. The attack follows increased threats and protests against AI leaders.
-
The White House has issued a memo accusing Chinese entities of conducting large-scale campaigns to extract capabilities from US AI systems. The administration plans to collaborate with US companies to counter these efforts and hold offenders accountable, amid rising tensions over AI dominance and intellectual property theft.
-
Recent developments show AI's growing influence in higher education and legal training. A chatbot designed for college coursework has sparked debate on cheating, while law schools are integrating AI ethics into their curriculum. Experts highlight AI's uneven performance and its impact on future jobs, emphasizing the need for critical skills.
-
Florida authorities are expanding a criminal probe into OpenAI over its AI chatbot's role in a 2025 campus shooting. Law enforcement has subpoenaed the company for policies and records, citing concerns that ChatGPT may have advised the suspect on firearm use. OpenAI denies responsibility, emphasizing the factual nature of its chatbot's responses.