We Did Everything Right. Apple Banned Us Anyway. So We Made One Phone Call.
It's February 11th, around 3pm, and Jeremy posts in #general: "seems apple banned our apple account. iMessage is down for now. Ethan and I are aware and on it." Six weeks of work — an RFC, a Swift bridge, a full rewrite in JavaScript — and one Slack message is how it ends. Ten thousand messages in twelve hours. Apple flagged us as spam. No warning, no appeal. We had done our research. We had used the same open-source library as our competitors. We had done it right. It didn't matter.
Within hours, Ali is on a call with a company called Linq. By the next morning, Jeremy has rebuilt the entire integration using Claude Code. Four hours of work to replace six weeks. A month later, we deleted every line of the old bridge and our PR bot left a review: "It's mostly deletions, which is the best kind of code. Hard to find bugs in code that doesn't exist."
This is the story of how we built iMessage support at Lindy three times in six weeks — and why the version that stuck is the one where we stopped trying to own the hard part.
Why iMessage Is a Hardware Problem Disguised as a Software Problem
Most messaging integrations are API calls. You hit an endpoint, you send a message. iMessage is not most messaging integrations.
To send an iMessage programmatically, you need:
- A physical Mac running Messages.app (Apple doesn't offer an API)
- A physical iPhone paired to that Mac for number registration
- An Apple ID dedicated to that phone number
- A bridge daemon that monitors the Mac's SQLite chat database and translates between your app and Messages.app
Each phone number requires its own Mac, its own iPhone, and its own Apple ID. There's no way to virtualize this. You can't run it on AWS (well, you can rent EC2 Mac instances, but you still need the phone). You can't share Apple IDs across numbers. The infrastructure scales linearly with phone numbers, and each unit costs real money in hardware.
Our RFC from December 2025 laid this out clearly. Jeremy evaluated two hosting options: MacStadium in Las Vegas at $119/month per Mac Mini, or Scaleway in France at 75 EUR/month. We went with MacStadium for the US region. The plan was a custom Swift daemon that would monitor ~/Library/Messages/chat.db using SQLite WAL (write-ahead logging), catch incoming messages in near-real-time, and forward them to Lindy via webhooks. Outbound messages would go through Apple's private frameworks — essentially programmatic UI injection into Messages.app.
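The heart of that design is a cursor over chat.db's monotonically increasing ROWIDs. Here's a hypothetical sketch of the delta logic — the names (`Message`, `pickNewInbound`) are illustrative, and in the real daemon the rows would come from a SQLite query against chat.db in WAL mode rather than an in-memory array:

```typescript
// Illustrative shape of a row from chat.db's message table
interface Message {
  rowid: number      // monotonically increasing ROWID from chat.db
  text: string
  isFromMe: boolean  // true for messages Lindy itself sent
}

// Each poll: keep only rows past the last cursor, advance the cursor, and
// forward inbound messages (not our own sends echoed back) via webhook.
function pickNewInbound(
  rows: Message[],
  lastSeenRowid: number,
): { inbound: Message[]; cursor: number } {
  const fresh = rows.filter((r) => r.rowid > lastSeenRowid)
  const cursor = fresh.length
    ? Math.max(...fresh.map((r) => r.rowid))
    : lastSeenRowid
  return { inbound: fresh.filter((r) => !r.isFromMe), cursor }
}
```

WAL mode matters here because the daemon reads the database while Messages.app is actively writing to it; without write-ahead logging, the reads would contend with Apple's own writes.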
The architecture was ambitious. The success metrics in the RFC said >99% delivery rate, <2s latency, zero data loss. On paper, it was thorough.
On paper.
Version 1: We Wrote It in Swift
Jeremy is the kind of engineer who reads the constraints and builds anyway. By mid-January, he had a MacStadium machine pending validation, an iPhone purchased, and a bridge daemon taking shape.
The problem was Swift. Nobody on the team writes Swift. The bridge needed to interface with macOS frameworks, parse Messages.app's internal database format, handle attachment types, manage the Apple ID session — and all of this in a language we were learning as we went. Ethan wrote comprehensive documentation for the bridge architecture (PR #16546, which the team called "exceptionally high quality"), but documentation doesn't fix the fundamental issue: when the bridge breaks at 2am, you need someone who can debug it, and nobody here thinks in Swift.
Here's what the bridge's health monitoring looked like — this is from our actual type definitions:
```typescript
import { z } from 'zod'

export const iMessageBridgeHealthSchema = z.object({
  bridgeId: z.string(),
  status: z.enum(['healthy', 'unhealthy']),
  messagesAppRunning: z.boolean(),
  lastMessageReceivedAt: z.number().optional(),
  lastMessageSentAt: z.number().optional(),
  pendingCommands: z.number(),
  uptime: z.number(),
  version: z.string(),
})
```
That messagesAppRunning boolean tells you everything about this architecture. We were checking whether Messages.app — Apple's consumer chat client — was still running on a rented Mac in Las Vegas. This was a production dependency.
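To make the fragility concrete, here's a hedged sketch of what consuming that health payload looks like. The thresholds and the `shouldPage` name are illustrative, not our production values:

```typescript
// Mirrors the schema above, minus fields the check doesn't use
interface BridgeHealth {
  status: 'healthy' | 'unhealthy'
  messagesAppRunning: boolean
  lastMessageReceivedAt?: number // epoch ms
  pendingCommands: number
}

function shouldPage(h: BridgeHealth, now: number, maxSilenceMs = 10 * 60_000): boolean {
  if (h.status === 'unhealthy' || !h.messagesAppRunning) return true
  if (h.pendingCommands > 50) return true // commands piling up: the send path is stuck
  // No inbound traffic for too long can mean the SQLite watcher died silently.
  return h.lastMessageReceivedAt !== undefined && now - h.lastMessageReceivedAt > maxSilenceMs
}
```

Every branch in that function is a way a consumer chat app on a remote Mac can quietly stop being your message bus.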
So Jeremy did what any good engineer does when the implementation language is wrong: he started looking at what everyone else was using.
Version 2: We Did It Right
Poke AI and OpenCloud both use a library called BlueBubbles. It's an open-source JavaScript project that does roughly what our Swift bridge did — monitors the Messages.app database, handles send/receive, manages the Apple ID session — but in a language our team actually knows. Multiple companies in production with it. Active development. Real users.
We rewrote the bridge in JavaScript using BlueBubbles. By early February, we had formatting support working (bold, italics, edit, unsend) through a custom BlueBubbles build. Jeremy was shipping features fast — attachments, message payloads, YouTube link previews. The team decided at a sync meeting: "iMessage-first; avoid native app investment for now." iMessage wasn't just a feature. It was becoming the product strategy.
The one thing we didn't solve was the number problem. Flo, our CEO, had pushed for a multi-number setup like Poke AI: round-robin across several numbers, share a contact card with each user so they never see the switching. It's the right product answer. It's also $119/month per number in Mac Minis alone, plus an iPhone and Apple ID for each. At our stage, that math didn't work. We went with one number and made a conscious bet: we'd probably send a few hundred messages a day. Apple probably wouldn't notice.
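The back-of-envelope math looks something like this. The $119/month figure is from the RFC; the iPhone price and 24-month amortization are assumptions for illustration:

```typescript
// Rough cost model for self-hosted iMessage numbers. macMiniMonthly comes
// from the RFC ($119 at MacStadium); iphonePrice and amortizationMonths
// are illustrative assumptions, not figures from the RFC.
function monthlyCostPerNumber(opts: {
  macMiniMonthly: number
  iphonePrice: number
  amortizationMonths: number
}): number {
  return opts.macMiniMonthly + opts.iphonePrice / opts.amortizationMonths
}

// A Poke-style rotation over five numbers:
const perNumber = monthlyCostPerNumber({
  macMiniMonthly: 119,
  iphonePrice: 429, // hypothetical entry-level iPhone price
  amortizationMonths: 24,
})
const fiveNumbers = perNumber * 5 // ≈ $684/month, before Apple IDs and ops time
```

That's hardware cost only — it doesn't price in the engineering time to babysit five Macs, which is the number that actually killed the idea.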
On February 10th, Ethan flagged something in Slack: "BlueBubbles told us they aren't on iMessage." The detection was getting flaky. We noted it and kept shipping.
Twenty-four hours later, Apple banned our account.
The Ban and the Fork
February 11th, 3:14 PM. Jeremy's message in #general. Ethan spins up a war room: #2026-02-11-warroom-imessage. He rolls out an iMessage kill switch — a feature flag that disables the channel while we figure out what happened. SMS fallback activates for affected users.
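The kill switch itself is simple routing. A hedged sketch, with hypothetical names — a feature flag gates the iMessage channel and sends degrade to SMS while it's off:

```typescript
type Channel = 'imessage' | 'sms'

// Decide where an outbound message goes given the kill switch state.
// Returning null means: queue the message for replay once a channel is back.
function pickChannel(
  flags: { imessageEnabled: boolean },
  userHasSmsFallback: boolean,
): Channel | null {
  if (flags.imessageEnabled) return 'imessage'
  return userHasSmsFallback ? 'sms' : null
}
```

The null branch is why 175 messages needed a replay script later: not every user had a working SMS fallback when the flag flipped.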
Here's what happened: we sent 10,000 messages in twelve hours. We had estimated a few hundred per day. We were off by a factor of roughly 40. Apple's spam detection doesn't care that you're a legitimate product with real users. Ten thousand messages from a single Apple ID in half a day looks like spam, because by almost any reasonable definition, it is.
175 messages were lost during the outage. Jeremy wrote a script to recover and replay them ("84/175... Done!"). At least one user — a trial customer on day two — received messages from multiple numbers during the chaos and started looking at Poke AI as an alternative.
We had two options:
Option 1: Buy a new iPhone. Set up a new Apple ID. Spin up another MacStadium instance. Reconfigure BlueBubbles. Warm the number slowly. Hope we don't get banned again. Cost: $119/month plus hardware, plus the engineering time, plus the same structural risk.
Option 2: Pick up the phone.
The instinct — the engineering instinct — is option 1. You own the stack. You understand the failure mode. You can fix it. You can build rate limiting, add number rotation, implement warming schedules. The problem is solvable with more engineering.
But "solvable with more engineering" is how you end up maintaining a Mac Mini in a Las Vegas data center for the rest of your life.
Four Hours with Claude Code and a Company We'd Never Heard Of
Ali had been in touch with Linq — a service that handles iMessage infrastructure as a managed platform. They provision the phones, manage the Apple IDs, handle number warming, and expose an API. You call their endpoint. They deliver the message. The hardware, the spam risk, the Apple ID gymnastics — that's their problem.
Jeremy started rebuilding on February 11th. By February 12th, Flo posted in Slack: "we are back up fyi. have been for a while." Jeremy shared four new phone numbers from Linq, all San Francisco area codes. He asked Ethan if they could cancel the MacStadium subscription.
The rebuild took about four hours, most of it done with Claude Code. It wasn't perfect — Jeremy spent the next week re-implementing features on top of the new integration. Read receipts. Attachments. Voice memos. Reply threading. Typing indicators. Each one required mapping Linq's API to our internal message format. But the core — send a message, receive a message, don't get banned — was working in an afternoon.
The difference in architecture is visible in the code. Here's what sending a message looks like now:
```typescript
// Build message parts for the Linq API
const parts: LinqMessagePart[] = []
if (text) {
  parts.push({ type: 'text', value: text })
}
if (attachments && attachments.length > 0) {
  for (const attachment of attachments) {
    // Upload to Linq first; fall back to a direct URL if the upload fails
    const attachmentId = await uploadAttachmentToLinq(attachment)
    parts.push(
      attachmentId
        ? { type: 'media', attachment_id: attachmentId }
        : { type: 'media', url: attachment.url } // fallback
    )
  }
}
await fetch(`${LINQ_API_BASE}/chats/${chatId}/messages`, {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${apiToken}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ message: { parts } }),
})
```
No daemon. No SQLite WAL monitor. No messagesAppRunning check. Just an HTTP POST. The incoming side is similar — Linq pushes webhook events (message.received, message.delivered, message.read, reaction.added) and we route them through a single handler. The old bridge had us polling for commands every 1-2 seconds.
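The incoming side can be sketched as a dispatch table keyed on event type. The event names are the ones listed above; the routing shape itself is hypothetical:

```typescript
interface LinqWebhookEvent {
  type: string // e.g. 'message.received', 'message.delivered', 'message.read', 'reaction.added'
  payload: unknown
}

// Single handler for all Linq webhook events. Returns whether the event
// type was recognized; unknown events should still be acked with a 200 so
// the provider doesn't retry them forever.
function routeLinqEvent(
  event: LinqWebhookEvent,
  handlers: Record<string, (payload: unknown) => void>,
): boolean {
  const handler = handlers[event.type]
  if (!handler) return false
  handler(event.payload)
  return true
}
```

Compare that to the old bridge: push-based webhooks mean no polling loop, no cursor state, and no daemon whose health we have to monitor.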
By February 17th, Jeremy posted his daily update: migration complete, MacStadium instance stopped. On February 19th, the cleanup PR landed — 76 files changed, 9,745 lines deleted. It removed apps/imessage-bridge (the Node.js bridge with BlueBubbles), 759 lines of bridge management service code, the MacStadium infrastructure configs, the WebSocket realtime channels, all of it. Gone.
What We're Still Figuring Out
It would be clean to end the story here. We don't get to.
We're running eight Linq numbers now. The top two are doing 5,000-6,000 messages per day — already bumping against Linq's recommended limit of 5K per number. Our send-to-receive ratio is 4:1, which means Lindy sends four messages for every one it gets back. Apple's spam detection doesn't just look at volume. It looks at whether the conversation is reciprocal. A 4:1 ratio is a red flag.
Ethan wrote a comprehensive anti-ban strategy the week after the migration:
- Load balance smarter — assign users to the lowest-traffic number at signup, not round-robin
- Track reciprocity — if a user stops responding, stop texting them
- Flip the onboarding flow — instead of Lindy texting the user first (outbound, looks like spam), have the user text Lindy first via a deep link with a pre-filled code (inbound, looks like a real conversation)
- Warm numbers gradually — don't dump full traffic on a new number day one
- Dynamic contact cards — per-user cards with an assigned primary number and random backup numbers for failover
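The first bullet — lowest-traffic assignment instead of round-robin — is a one-function change. A minimal sketch, assuming we track per-number daily send counts (the counts and pool here are illustrative):

```typescript
// Pick the number with the fewest sends today, so new users land on
// whichever number has the most headroom under the ~5K/day ceiling.
function leastLoadedNumber(dailySends: Map<string, number>): string | null {
  let best: string | null = null
  let min = Infinity
  for (const [phoneNumber, sends] of dailySends) {
    if (sends < min) {
      min = sends
      best = phoneNumber
    }
  }
  return best
}
```

Round-robin spreads users evenly at signup time but ignores how chatty each user turns out to be; picking by live traffic corrects for that drift.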
The onboarding flip is the most interesting one. Our old flow had Lindy initiate contact — you sign up, and we text you. From Apple's perspective, that's an unknown number sending unsolicited messages. The new flow has the user text first, which makes the entire thread look like a conversation the user started. Same product, same feature, completely different signal to Apple's spam model.
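The deep link is ordinary URL plumbing: on iOS, an `sms:` URL with an `&body=` parameter opens Messages with the text pre-filled. A sketch — the number and the code wording are hypothetical:

```typescript
// Build an sms: deep link that opens Messages with a pre-filled signup code,
// so the first message in the thread comes from the user, not from Lindy.
// Note: iOS uses `&body=`; Android expects `?body=` instead.
function imessageDeepLink(phoneNumber: string, signupCode: string): string {
  const body = encodeURIComponent(`Hi Lindy! My code is ${signupCode}`)
  return `sms:${phoneNumber}&body=${body}`
}
```

The user taps the link, hits send, and from that point every Lindy message is a reply inside a thread the user opened.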
We're also running an experiment with iMessage-first signup, inspired by what Poke AI, textarlo.ai, and boardy.ai are doing: skip the app entirely, let users interact with Lindy purely through iMessage from day one.
Flo's original multi-number suggestion — the one we rejected because it was too expensive at $119 per number — is exactly what we're running now through Linq. The difference is we don't maintain the hardware. We don't manage the Apple IDs. We don't wake up at 2am when a bridge daemon crashes. That's Linq's problem.
We spent six weeks learning that the right answer was to make it someone else's problem. We could have gotten there faster. But then we wouldn't have this blog post.