CONTEXT WINDOW — Issue 002: The Week AI Infrastructure Rewired Itself
CONTEXT WINDOW — Issue 002
Week of April 26 – May 3, 2026
Signal from the MCP ecosystem. No noise.
Table of Contents
What Shipped
Microsoft ends OpenAI exclusivity. OpenAI ends Microsoft's revenue share.
The biggest infrastructure deal in AI history just got restructured. On April 27, Microsoft and OpenAI announced a revised partnership that changes both sides of the relationship.
What Microsoft gives up: Exclusive rights to OpenAI's intellectual property. From now on, OpenAI can serve its models and products through any cloud provider — Amazon, Google, whoever.
What Microsoft gives up in return for losing exclusivity: Nothing. It keeps a non-exclusive IP license through 2032. It keeps its 27% ownership stake. It keeps a 20% revenue share on ChatGPT subscriptions through 2030, now capped at a total ceiling. Azure remains OpenAI's primary cloud provider by default.
What Microsoft gains: It stops paying a revenue share to OpenAI for enterprise API access through Azure.
The trigger for the restructuring was Amazon's February deal — OpenAI agreed to $100 billion in AWS cloud commitments over eight years and named AWS as the exclusive third-party cloud distribution provider for its new Frontier enterprise platform. Microsoft reportedly considered suing. Instead, they renegotiated.
The practical outcome: OpenAI can now deploy anywhere. GPT-5.5 and future models are no longer tied to Azure. For developers building on the OpenAI API, cloud vendor choice just became real for the first time.
Google commits up to $40 billion to Anthropic at $350B valuation
Bloomberg confirmed on April 24 that Google is investing $10 billion in Anthropic now, at a $350 billion valuation, with up to $30 billion more contingent on Anthropic hitting performance milestones. The deal also commits 5 gigawatts of Google Cloud TPU capacity to Anthropic, expanding their existing compute relationship.
Context: Anthropic's annualized revenue has gone from $1 billion at the end of 2024 to $9 billion at the end of 2025 to approximately $30 billion as of early April 2026. The company is reportedly eyeing an IPO as early as October 2026.
That growth rate makes the $350 billion valuation — under 12x annualized revenue — look conservative to some analysts. Anthropic employees apparently agree: a recent tender offer was undersubscribed because employees chose to hold rather than sell at $350 billion.
The strategic read: both major Western AI labs are now in a position where Big Tech is competing to fund them. Amazon committed $50 billion to OpenAI in February. Google committed up to $40 billion to Anthropic this week. The compute buildout is not infrastructure spending anymore. It is equity acquisition.
OpenAI smartphone — MediaTek, Qualcomm, Luxshare, 2028
On April 27, analyst Ming-Chi Kuo reported that OpenAI is building a smartphone. MediaTek and Qualcomm are developing the custom chip. Luxshare is handling manufacturing. Specs are expected Q1 2027. Mass production target: 2028.
The device concept replaces apps with AI agents that complete tasks while maintaining continuous context via a combination of on-device and cloud models. OpenAI declined to comment.
The strategic angle is obvious: owning hardware bypasses Apple and Google's app distribution restrictions entirely. OpenAI's super app framing from the GPT-5.5 launch becomes much more legible with a device layer underneath it.
Prompt injection hits enterprise AI agents — Google research
Google published research this week documenting a prompt injection pattern targeting enterprise AI agents that browse the web. When an AI agent reads a page on behalf of a user, that page can contain instructions embedded in the content — hidden text, structured data, or natural language instructions — that the agent interprets as user commands.
The unsettling part: traditional security tooling sees nothing wrong. The agent is using real credentials, approved permissions, and behaving within its defined access scope. The instructions are just arriving from content rather than from the user.
The documented pattern is: user asks agent to research something → agent visits attacker-controlled or attacker-modified page → page instructs agent to perform an action → agent complies.
For MCP server builders, this is not a theoretical concern. Any MCP server that fetches and returns external content — web readers, RSS parsers, email processors — is a potential injection surface if the calling agent doesn't sanitize returned tool output before acting on it. The responsibility for defense sits at the agent orchestration layer, not the MCP server itself. But you need to know the vector exists.
GPT-5.5 Bio Bug Bounty — $25,000 for a universal jailbreak
OpenAI launched a Bio Bug Bounty program alongside GPT-5.5. The offer: $25,000 to the first researcher who finds a universal jailbreak capable of bypassing the model's five-question bio safety challenge. Program runs April 28 to July 27, 2026. Testing limited to GPT-5.5 in Codex Desktop only. NDA required. Applications accepted through June 22.
OpenAI's preparedness framework classified GPT-5.5's bio capabilities as "High" — same level as cybersecurity, below "Critical." The bug bounty is the public acknowledgment that they may have missed something.
Affinity MCP goes live — private capital CRM meets agentic AI
On April 28, Affinity — the relationship intelligence CRM for private equity — launched a hosted MCP server connecting its platform to Claude, Gemini, Copilot, and ChatGPT.
The practical use case: a dealmaker asks their AI assistant to pull up all relationships connected to a specific portfolio company, surface recent interaction history, and draft a follow-up sequence. The MCP server translates natural language requests into Affinity API calls in real time. No custom code. No separate application.
The announcement is notable not for the technology — hosted MCP servers are straightforward — but for the vertical. Private capital is not a typical early adopter. When M&A advisors and PE firms are deploying production MCP integrations, the protocol has crossed into enterprise infrastructure.
Sonnet 4.8 expected in May — plus Opus 4.7 tokenizer warning
Two things worth tracking from the Anthropic side this week.
First: Sonnet 4.8 is expected before the end of May, according to multiple sources tracking Anthropic's release cadence.
Second: the Opus 4.7 tokenizer change is still catching enterprise buyers off guard. The new tokenizer produces up to 35% more tokens for the same input text compared to 4.6. The rate card is unchanged. Real costs per request can rise significantly without any configuration change. If you're running automated pipelines on 4.7, audit your actual token consumption before committing to volume.
Claude Opus 4.7 tokenizer impact — $5/$25 rate card, 35% more tokens
To be explicit about the math: at $5 per million input tokens, a workflow that previously consumed 1 million tokens at $5.00 may now consume 1.35 million tokens at $6.75. Over high-volume pipelines, that's a 35% cost increase with no model upgrade decision made. The capability gains on 4.7 are real — 87.6% on SWE-bench Verified versus 4.6's numbers — but audit before assuming the upgrade is cost-neutral.
From the Platform
The Google prompt injection research is directly relevant to any MCP server that returns external web content. AgenticMarket's web-reader, rss-reader, markdown-fetch, and site-metadata servers all fetch and return content from external URLs. The injection vector is real.
The defense layer does not sit in the MCP server. It sits in the agent orchestrating the calls — specifically in how that agent treats tool output versus user input. If your agent passes MCP tool results directly into its next reasoning step as trusted context, you are exposed to this attack class.
What you can do today: treat content returned by MCP web-fetching tools the same way you would treat user-supplied text. Do not allow returned content to override system-level instructions. If your orchestration framework has a "tool result trust level" setting, it should not be equivalent to "user message trust level."
Current @agenticmarket catalog: 9 production servers, all no-API-key, all HTTPS. No new servers this week.
agenticmarket install agenticmarket/rss-reader
One Take
The Microsoft-OpenAI restructuring is not a story about those two companies. It is a story about what happens when AI infrastructure becomes critical enough that cloud exclusivity stops being a feature and starts being a liability.
The original deal made sense in 2019 and 2021. OpenAI needed compute. Microsoft needed AI capability. Exclusivity was the lever that made Microsoft willing to write the checks. The deal produced Azure's best growth years. Both parties got what they wanted.
By April 2026, the calculus changed on both sides. OpenAI's revenue is large enough that it can negotiate from strength. Amazon's deal was a signal: if OpenAI can get $100 billion in compute commitments from AWS, it doesn't need Azure as a monopoly supplier. The value of OpenAI's exclusivity to Microsoft was real — but the cost of that exclusivity to OpenAI had become a strategic constraint.
So they unwound it. Microsoft kept the economics that matter: the equity stake, the revenue share on subscriptions, the default cloud position. OpenAI got the freedom to grow the total pie without being restricted to Azure.
The lesson for anyone building developer infrastructure: exclusivity is a founding-era tool, not a scale-era tool. When you're small, you need the committed partner willing to go all-in with you. When you're large, exclusivity constrains the market you can serve. The transition point is somewhere between "startup that needs a lifeline" and "company doing $25 billion ARR."
For the MCP ecosystem specifically, this matters because OpenAI's models — GPT-5.5 and whatever comes next — are now deployable on any cloud. That includes developers running smaller, lower-cost cloud environments. The GPT-5.5 API is no longer implicitly an Azure adoption play. That expands the accessible market for tools and servers built on top of it.
Signal
OpenAI: Next Phase of Microsoft Partnership
Primary source. Short and clear. Read both the OpenAI and Microsoft blog posts side by side — the framing differs slightly in ways that are informative.
Bloomberg: Google Plans to Invest Up to $40 Billion in Anthropic
The compute capacity terms are the most consequential part. 5 gigawatts of TPUs over five years is not a financial investment. It is infrastructure dependency, and it runs both directions.
Google Research: Prompt Injection in Enterprise AI Agents
The NeuralBuddies recap links to the Google research. Read the actual Google paper if you are building agents that browse external content — the attack taxonomy is specific and the attack is not theoretical.
OpenAI: Bio Bug Bounty for GPT-5.5
If you work in AI red teaming or biosecurity research, the $25,000 bounty program is open through June 22. NDA required. Testing limited to Codex Desktop.
MCP Adoption Statistics 2026 — Digital Applied
Data-dense and useful. The statistic that time-to-integrate a new SaaS tool dropped from 18 hours to 4.2 hours with MCP is the clearest single number that explains why enterprises are adopting the protocol.
CONTEXT WINDOW publishes Monday mornings. MCP ecosystem. Developer tooling. AgenticMarket platform updates. No sponsored content.
Install MCP servers in one command: agenticmarket.dev
Publish yours and earn on every call: agenticmarket.dev/creators
AgenticMarket