Microsoft’s $152M Talent Grab Targets 20% OpenAI Cost Cut as Vertiv Drops PUE 5-7%
TL;DR
- Vertiv acquires ThermoKey for $92M to enhance thermal management for AI data centers, closing in Q2 2026
- Microsoft hires top AI researchers from Ai2 to reduce OpenAI dependence, shifting focus to applied AI under Mustafa Suleyman's Superintelligence team
- OpenAI shuts down Sora video app after $1B Disney deal ends, refocusing on enterprise AI and AGI research
🚀 Vertiv Drops $92M for ThermoKey: Italian Coolers to Shrink AI Data-Center PUE 7%
$92M buys Vertiv the keys to 30% cooler AI racks: 5-7% PUE cut = 10-15% more compute in zero extra space 🚀. As regulators eye Q2 close, will your next data center be Ohio-built & Italy-cooled?
Vertiv’s agreement to buy Italy’s ThermoKey for roughly $92 million, announced Monday and set to close next quarter, is less a footnote than a thermostat for the AI boom. By folding ThermoKey’s high-density heat exchangers into its modular OneCore™ racks, Vertiv gains the missing link that lets a 12.5 MW AI block run 10–15% denser without pouring new concrete or burning extra electrons.
How the deal cools the stack
ThermoKey’s liquid-to-air and liquid-to-liquid cores will drop into Vertiv’s existing “SimReady” digital twin, letting engineers validate airflow before steel is cut. Early pilots in Ohio and Italy target a 4% PUE cut this year; commercial modules in 2027 should widen that to 5–7% while trimming idle power draw 5%. The payoff: a rack that once capped out at 35 kW can now flirt with 40 kW under the same roof.
Who wins, who worries
- Operators: Every 1% PUE gain on a 10 MW site saves ~350 MWh a year, enough to power about 30 U.S. homes.
- Competitors: Frore’s jet-cooled chips and Akash’s diamond spreaders promise radical gains but carry boutique price tags; ThermoKey’s bolt-in modules undercut on capital cost.
- Regulators: Brussels and Washington must bless the tie-up; a 30-day slip would nudge integration into late summer.
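The 35-to-40 kW rack jump and the ~350 MWh operator saving can be roughly reproduced with back-of-envelope arithmetic. The baseline PUE, the IT-load reading of "10 MW site", and the household consumption figure below are assumptions for illustration, not numbers from the deal announcement:

```python
# Sanity-checking two headline numbers from the Vertiv/ThermoKey section.
# Assumptions (not stated in the article): 10 MW of IT load, a baseline
# PUE of 1.4, "1% PUE gain" read as a 1% cut in overhead (cooling) energy,
# and ~11 MWh/year for an average U.S. home.
HOURS_PER_YEAR = 8760

# Rack density: running 10-15% denser lifts a 35 kW rack toward 40 kW.
base_kw = 35.0
new_cap_kw = base_kw * 1.15            # upper end of the 10-15% range

# Operator savings: 1% overhead cut on an assumed 10 MW IT load.
it_load_mw = 10.0
overhead = 1.4 - 1.0                   # overhead watts per IT watt at PUE 1.4
delta_pue = 0.01 * overhead            # 1% relative cut in overhead
savings_mwh = it_load_mw * delta_pue * HOURS_PER_YEAR
homes = savings_mwh / 11.0             # assumed household annual MWh

print(f"Rack headroom: {new_cap_kw:.1f} kW")
print(f"Annual savings: {savings_mwh:.0f} MWh, ~{homes:.0f} homes")
```

Under these assumptions the arithmetic lands close to the article's figures; a different baseline PUE or load split would shift the savings proportionally.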
What comes next
- Q3 2026: Dual pilots go live; ≥4% PUE drop versus baseline.
- 2027: OneCore with ThermoKey cores ships globally; Vertiv’s share of AI-cooling sales climbs to ~15%.
- 2028-30: Digital twins auto-tune entire fleets, pushing AI workload CO₂-e down at least 10%.
By marrying Italian heat-rejection craft to Ohio engineering scale, Vertiv turns a niche component into the industry’s default radiator. If the regulators nod, the next frontier of AI won’t just be faster chips—it will be cooler ones.
💸 Microsoft’s $3.1B Talent Raid Kills Ai2 OLMo, Targets 20% OpenAI Cost Cut
Microsoft just yanked $152M out of frontier-model research, poached Ai2’s 3 top brains, and aims to slash OpenAI bills 20% within a year—while sitting on a fresh $3.1B war-chest. Copilot’s new 4-pillar engine fires up today. Ready for an AI future that doesn’t say "OpenAI inside"?
On 24 Mar 2026 Microsoft pulled Ali Farhadi, Hanna Hajishirzi and Ranjay Krishna out of Seattle’s Allen Institute for AI and folded them into Mustafa Suleyman’s “Superintelligence” unit, instantly swapping a $152M frontier-research bill for a 15-20% cut in OpenAI licensing fees, worth about $35M every quarter.
How the new pipeline works
Farhadi now oversees model adaptation, Hajishirzi architects knowledge-retrieval layers and Krishna grounds vision-language tools inside the four freshly carved Copilot pillars (Experience, Platform, 365 Apps, Infrastructure). The $3.1 B FFST endowment bankrolls a five-year roadmap that converts OLMo’s open-science codebase into Azure-hosted, domain-specific micro-models.
Impacts
- Cost: $30-40M quarterly saving on OpenAI API calls → internal rate of return climbs to 12-15% within three years.
- Product: Latency per token drops 18 %; pilot assistants for finance and healthcare ship this summer.
- Talent: >$200M in retention packages keeps 150 incoming engineers away from Meta and Nvidia recruiters.
- Risk: 12-month integration window; any misalignment stalls Copilot pillar releases and hands rivals a feature gap.
Outlook
- 2026–2027: 10-15% OpenAI spend reduction; first custom LLMs serve 30,000 enterprise tenants.
- Q4 2028: “Mosaic-1” foundation model debuts, cutting token cost another 25%.
- 2030: Core Microsoft services run ≥80% on in-house models, freeing up $1B annually for safety-and-alignment labs.
By turning three star researchers into a profit lever, Microsoft shows that applied AI, not bigger frontier demos, now drives the cloud scoreboard.
😱 Sora Collapse: OpenAI Kills Consumer App, Disney Dumps $1B Deal
32% fewer Sora installs in Q1-26, like losing a Marvel-sized audience overnight 😱. GPU appetite (15% of OpenAI’s budget) devoured margins; Disney yanked its $1B deal and 200+ licensed characters. Creators are left scrambling: will Google or Runway crown the next video king?
OpenAI pulled the plug on Sora, its much-hyped text-to-video app, on 24 Mar 2026, barely six months after it topped the U.S. App Store. The move ends a $1 billion Disney co-branding deal and redirects roughly 15 % of OpenAI’s annual GPU budget away from consumer video and toward enterprise AI and AGI research.
How the numbers flipped
Sora rocketed to #1 in Photo & Video in Sep 2025 with >1 million downloads in a week, yet by Mar 2026 it had sunk to #172 among free apps and new installs were down 32% quarter-over-quarter. Generating a single clip consumed 2-3× the compute of a text prompt, so every viral lightsaber-laden video also burned server time worth thousands of dollars while yielding near-zero direct revenue.
Who gains, who loses
- Disney: walks away without paying the pledged $1 billion equity and can court Google or Microsoft for safer, licensed-character tools.
- OpenAI: frees 25% of the Sora engineering corps to harden ChatGPT, Codex and a forthcoming “world-simulation” stack for robotics.
- Competitors: Google’s Gemini 3 Pro and Runway’s Veo now pitch filmmakers with faster, cheaper generation and clearer IP guardrails.
What happens next
- Q3 2026: OpenAI ships a desktop “super-app” bundling ChatGPT, Codex and data-analytics, targeting Fortune-500 accounts.
- 2027: robotics clients tap the re-allocated GPU pool to train factory-scale digital twins, a step OpenAI calls “Spud.”
- 2028: if enterprise ARPU keeps climbing, analysts project OpenAI could defer an IPO and still raise capital on cash flow rather than hype.
OpenAI’s retreat from Hollywood dazzle is a cold calculation: enterprise contracts pay per query, while consumer eyeballs merely racked up legal risk and server debt. The company that taught machines to speak now bets its future on teaching them to work, not to star in the next blockbuster.
In Other News
- BlueConic integrates Databricks Marketplace to enable real-time customer decisioning via Delta Sharing protocol
- Nagarro Reports €999M Revenue in 2025, Targets €1.06B in 2026 Amid AI-Driven Growth
- TRON DAO scales AI fund to $1B, targets agent identity systems and tokenized RWA with $21B daily transaction volume
- NoTraffic raises $90M Series A to deploy AI traffic optimization at 400+ city intersections, reducing congestion and emissions