Claude Desktop's Enterprise Surface, a Thousand Versions Later
A Thousand Versions
I wrote about Claude Desktop’s enterprise configuration a couple of days ago after unpacking v1.1617.0 — Bedrock routing, token budgets, credential helpers, sandbox controls. All of it undocumented, all of it shipping in every copy.
Claude Desktop is already at v1.2581.0. Almost a thousand version bumps. The app structure changed under my feet — v1.1617.0 had readable JavaScript inside the Electron asar. The current version compiles core logic into native Rust and Swift binaries, with application code split across 553 minified JS chunks and a 10MB Vite bundle. You can still grep the bundles, but it takes more effort.
Everything from the original post holds. What’s different is who can use it.
Not Just AWS Anymore
The original post was a Bedrock story because Bedrock was the only provider with a real config surface. If your org ran on GCP or Azure, the enterprise schema didn’t have much for you.
That’s no longer the case. An IT team on GCP can now push a Vertex AI configuration through Jamf the same way an AWS shop pushes Bedrock. Set the project ID, the region, the model list, and choose an auth method. For service accounts, point inferenceVertexCredentialsFile at a JSON key file and it gets mounted into the sandbox. For user-level auth, there’s a full OAuth flow — push a GCP OAuth client ID and secret via MDM, and Claude Desktop handles the browser sign-in, token exchange, and encrypted refresh token storage. The app uses safeStorage for the token and clears it if the client ID changes.
```shell
# Push Vertex AI inference config via CFPreferences (or the equivalent MDM payload)
defaults write com.anthropic.Claude inferenceProvider -string "vertex"
defaults write com.anthropic.Claude inferenceVertexProjectId -string "my-gcp-project"
defaults write com.anthropic.Claude inferenceVertexRegion -string "us-east5"
defaults write com.anthropic.Claude inferenceVertexOAuthClientId -string "NNN.apps.googleusercontent.com"
defaults write com.anthropic.Claude inferenceVertexOAuthClientSecret -string "GOCSPX-..."
defaults write com.anthropic.Claude inferenceModels -string '["claude-sonnet-4-6"]'
```
Azure AI Foundry works the same way. Set the resource name and an API key, and the app automatically computes the endpoint URL and adds ${resource}.services.ai.azure.com to the sandbox egress allow-list.
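A sketch of the equivalent Foundry push, mirroring the Vertex example above. The post confirms only that a resource name and API key are configurable; the exact preference key names here (inferenceFoundryResourceName, inferenceFoundryApiKey) are my assumptions, and the values are placeholders.

```shell
# Hypothetical key names -- only the resource-name + API-key shape is confirmed
defaults write com.anthropic.Claude inferenceProvider -string "foundry"
defaults write com.anthropic.Claude inferenceFoundryResourceName -string "my-foundry-resource"
defaults write com.anthropic.Claude inferenceFoundryApiKey -string "..."
# Endpoint URL is computed from the resource name; no need to push it
```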
For organizations running their own LLM gateway — a proxy that sits in front of Bedrock or Vertex, or a custom routing layer — the gateway provider takes a base URL and an API key. It’s the one provider that auto-discovers available models by hitting /v1/models on the gateway, so you don’t need to push the model list separately.
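Something like the following, assuming the same MDM-key naming pattern as the other providers — the gateway key names (inferenceGatewayBaseUrl, inferenceGatewayApiKey) are illustrative, not confirmed:

```shell
# Hypothetical key names following the inference* pattern
defaults write com.anthropic.Claude inferenceProvider -string "gateway"
defaults write com.anthropic.Claude inferenceGatewayBaseUrl -string "https://llm-gateway.internal"
defaults write com.anthropic.Claude inferenceGatewayApiKey -string "..."
# No inferenceModels key needed: the app discovers models via GET /v1/models
```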
All four providers share the same credential helper mechanism. If your org vends short-lived tokens from a central secrets manager instead of putting static API keys on machines, you point inferenceCredentialHelper at an executable and it handles the rest.
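A minimal helper sketch, under two assumptions that the post doesn’t spell out: that the app reads the short-lived token from the helper’s stdout, and that any executable path works. The keychain lookup stands in for whatever your secrets manager provides; swap in its CLI.

```shell
#!/bin/sh
# Sketch of a credential helper: print a short-lived inference token on stdout.
# The output contract is an assumption; the keychain call is a stand-in for
# your org's secrets-manager CLI.
exec security find-generic-password -w -s "claude-inference-token"
```

Then point the documented key at it (the path is illustrative):

```shell
defaults write com.anthropic.Claude inferenceCredentialHelper -string "/usr/local/bin/claude-cred-helper"
```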
The env var isolation I documented in the original post scales the same way. Each provider class injects its own set of environment variables into the embedded Claude Code process, and the parent then strips all three provider flags — CLAUDE_CODE_USE_BEDROCK, CLAUDE_CODE_USE_VERTEX, CLAUDE_CODE_USE_FOUNDRY — so they don’t leak across sessions.
Agent Telemetry in Your Stack
If you’re deploying Claude Desktop across an org, you probably want to know what it’s doing. Session durations, tool invocations, token usage — the kind of data you already collect for other developer tools.
An admin can now push three MDM keys — otlpEndpoint, otlpProtocol, otlpHeaders — and agent sessions start exporting OpenTelemetry spans to whatever collector you point it at. Datadog, Grafana Cloud, a self-hosted OTEL collector — if it speaks OTLP over HTTP or gRPC, it works. The headers field handles auth tokens for services that need them.
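The three key names come straight from the build; the endpoint and header values below are illustrative, and the JSON-string shape of otlpHeaders is my assumption:

```shell
# OTLP export config; values are examples for a self-hosted collector
defaults write com.anthropic.Claude otlpEndpoint -string "https://otel-collector.internal:4318"
defaults write com.anthropic.Claude otlpProtocol -string "http"
defaults write com.anthropic.Claude otlpHeaders -string '{"Authorization": "Bearer ..."}'
```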
This means Claude agent activity can live in the same dashboards as your other infrastructure telemetry, without depending on Anthropic’s own analytics pipeline.
Tighter Sandbox Controls
Admins can now specify which external hosts the agent sandbox is allowed to reach via coworkEgressAllowedHosts. Combined with the existing allowedWorkspaceFolders and disabledBuiltinTools, an IT team can lock down what the agent can access, what directories it can open, and what tools it can use — all pushed through MDM.
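The key names are the ones named above; the JSON-array-in-string value shape follows the inferenceModels convention, and the hosts, paths, and tool names are placeholders:

```shell
# Lock down egress, workspace access, and tool availability via MDM
defaults write com.anthropic.Claude coworkEgressAllowedHosts -string '["git.internal", "pypi.internal"]'
defaults write com.anthropic.Claude allowedWorkspaceFolders -string '["/Users/Shared/work"]'
defaults write com.anthropic.Claude disabledBuiltinTools -string '["WebSearch", "WebFetch"]'
```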
One detail I found interesting: when the inference provider is Bedrock, the app automatically removes WebSearch from the agent’s tool list. It’s a single conditional in the source. Bedrock users simply don’t get web search.
The agent environment also picked up local user settings for web search, tool disabling, scheduled tasks, file preview, and web fetch routing. These are preferences, not admin-enforced policies, but they show the agent mode growing more configurable.
References
- Original post: Reverse-Engineering Claude Desktop’s Enterprise Configuration
- Claude Code with AWS Bedrock
- Google Cloud Vertex AI
- Azure AI Foundry
- OpenTelemetry Protocol
- Apple CFPreferences