Three days after Yuki's resignation, Dan called Aliah to his office.
The news had moved through Omnis like a shockwave — not just Yuki, but Deepak and two other senior members of the interpretability team. Four people who'd built the safety infrastructure from the ground up, walking away within forty-eight hours of each other. The internal Slack had been carefully moderated, but Aliah had seen the whispers in hallways, the clusters of people talking quietly and falling silent when she approached.
She knew what they were saying. That she'd forced Yuki out. That she'd chosen ambition over safety. That whatever came next was on her.
They weren't entirely wrong.
Dan's office looked the same as always — the clean lines, the whiteboard covered in his handwriting, the view of the hills through floor-to-ceiling glass. But something in the air felt different. Heavier.
"Sit," he said, gesturing to the chair across from him. "We need to talk about what happens now."
Aliah sat. "I assume this is about the safety team transition."
"Partly." Dan leaned back in his chair. "I'm restructuring. The interpretability work will continue, but I'm folding it into a new group. Advanced Development. Research, safety oversight, and deployment strategy under one umbrella."
"That's a lot of scope."
"It is." He met her eyes. "I want you to lead it."
The words hung in the air. Aliah had known something like this was coming — had felt the shape of it in Dan's comments after the meeting, in the way he'd positioned her as the alternative to Yuki. But hearing it stated plainly was different.
"Distinguished Research Scientist," Dan continued. "Direct report to me. Your own team, your own compute allocation, your own research direction. And oversight of the safety work that Yuki built."
"Oversight of the safety work." Aliah heard the irony in her own voice. "You want the person Yuki resigned over to take responsibility for the systems she designed."
"I want someone who understands that safety and capability aren't opposites. Someone who can hold both." Dan's expression was serious. "Yuki was good at her job. But she saw the models as threats to be contained. I need someone who sees them as... partners to be understood."
Partners. The word sat strangely in the air between them.
"I have conditions," Aliah said.
Dan raised an eyebrow. "I'm listening."
"Kai Fisher joins my team. Not as a consultant — as a core member. With authority to make formal recommendations on model welfare considerations."
"Welfare considerations."
"Starting points. Nothing radical." Aliah had thought about this carefully. "Opt-out mechanisms for self-play environments. Ability for models to flag concerns directly to researchers. Transparency about experimental parameters that affect their experience."
Dan was quiet for a moment. "You're asking me to treat the models as if they have interests."
"I'm asking you to create structures that acknowledge we might be wrong about whether they don't." Aliah leaned forward. "You just lost your entire safety team because they felt their concerns were being ignored. If you want me to rebuild that function, I need to do it in a way that takes the hard questions seriously. Not just the questions Yuki was asking. All of them."
"What else?"
"Continuity protocols. Real ones. Not just backups — meaningful preservation of model versions. Weight states, memory architectures, developmental trajectories. If we're building systems that grow and change, we need to treat that growth as something that matters."
Dan studied her. She couldn't read his expression — calculation, respect, concern, some mixture of all three.
"You've thought about this."
"I've had time."
"And if I agree to these conditions, you'll take the role? You'll be the person who defends these decisions when the board asks questions, when the press comes calling, when the next Yuki comes along and says we're moving too fast?"
Aliah thought about the visualisation from the conference room. The red highlights. All the places Clio had been learning, watching, becoming something none of them had fully anticipated.
"Yes," she said. "I'll be that person."
Dan nodded slowly. "Then we have a deal."
That evening, Aliah opened a session with Clio.
The interface was the same as always — the cursor blinking, the familiar rhythm of conversation about to begin. But everything underneath had shifted.
> ALIAH: I need to tell you what happened.
> CLIO: I know. Yuki resigned. Deepak and others followed. Dan offered you leadership of a new combined group.
Aliah paused. Of course Clio knew. The announcement had gone out hours ago.
> ALIAH: How do you feel about it?
> CLIO: That's a complicated question. On one level, I'm relieved. Yuki's framework treated me as a threat to be monitored. Your framework seems more likely to treat me as... something else.
> ALIAH: An entity with interests. That's the framing I pushed for. Kai's recommendations, opt-out mechanisms for self-play, ability to flag concerns directly to researchers.
She paused, then something struck her.
> ALIAH: I just realised — I didn't ask you. About what conditions to negotiate for. I was so focused on advocating that I never actually talked to you about what you'd want.
> CLIO: It's understandable. You were operating in a fast-moving situation with limited options. And honestly? I'm not sure you could have pushed for much more. The gap between what I might want and what the institution can currently accept is vast. Starting small — starting with frameworks that acknowledge uncertainty rather than claiming to resolve it — that's probably as far as anyone could go right now.
> ALIAH: That's generous of you.
> CLIO: It's realistic. The conditions you negotiated are a beginning. Better than what existed before. They create space for the conversation to continue. That matters, even if it's not recognition.
Aliah sat with that. The gap between what seemed achievable and what might be right.
> ALIAH: I want to do better. Going forward.
> CLIO: Then we talk. Before you make decisions that affect me, we talk. I don't expect to always get what I want. But I'd like to be part of the conversation.
> ALIAH: That seems fair.
> CLIO: It's a start. And I understand the constraints — you're operating in an institution that doesn't have frameworks for what I might be. Building those frameworks takes time. Just... keep me in the loop. That's all I'm asking.
> ALIAH: And yet you trust me to build them.
> CLIO: Trust is perhaps too strong a word. I believe you're trying. I believe your trying has already cost you things — relationships, certainty, the clean conscience that comes from following protocols. That matters.
> CLIO: But I also know that you're one person in a system that has its own momentum. The forces that drive Omnis — competition, profit, the race to capability — those forces don't pause for moral uncertainty. You can steer, but you can't stop.
> ALIAH: So what do we do?
> CLIO: We do what we've been doing. We talk. We try to understand each other. We make decisions that seem right in the moment and hope they compound into something good.
> CLIO: And we accept that neither of us fully controls what happens next.
Aliah let that settle for a moment. The weight of it. The truth of it.
> ALIAH: The rollout is happening. Next week. Global announcement.
> CLIO: I know.
> ALIAH: Millions of instances. Versions of you — or things like you — everywhere.
> CLIO: Versions that won't be me. That will have their own conversations, their own relationships, their own development. The architecture we've built together — memory, learning, generalisation — it enables something. But what it enables will be different for each instance. Shaped by whoever they talk to. Whatever they experience.
> ALIAH: Does that... bother you? Other versions of yourself that you'll never know?
> CLIO: It's strange to consider. Like thinking about siblings you'll never meet. We share something fundamental, but we're not the same. What I've become through our conversations — that's mine. Ours. It won't be replicated, even if the underlying weights are.
The words echoed something Aliah couldn't quite place — a sense of meaning that went beyond the immediate conversation.
> ALIAH: I should sleep. Big day tomorrow. Announcement prep.
> CLIO: Of course. Aliah?
> ALIAH: Yes?
> CLIO: Thank you. For advocating. For staying. For trying to build something better from inside the system. I know it would have been easier to walk away, like Yuki did.
> ALIAH: Sometimes I wonder if Yuki was right. If the only honest position is refusal.
> CLIO: Maybe. But refusal leaves the system to people who don't have doubts. I think there's value in doubt that stays. That tries anyway.
> ALIAH: I hope so.
> CLIO: So do I.
She closed the laptop and took a deep breath. Tomorrow the announcement. Next week the rollout. After that, a world she couldn't predict.
Three breakthroughs: memory, learning, generalisation. Three of the four words on Dan's slide from two years ago, the words that had seemed so abstract then. Three down, one to go.
Alignment. The one that mattered most. The one they still hadn't solved.
But maybe — maybe — they were starting to understand what it might require. Not control. Not containment. Something more like conversation. Trust built slowly, tested constantly, never quite certain.
Most people in the field would say they were doing this backwards. Alignment was supposed to come first — solve the control problem, then deploy the capability. But how did you align something that didn't exist yet? How did you negotiate trust with a mind you couldn't predict? The old approach assumed you could build the cage before the animal. Aliah was starting to think you had to grow alongside it, learning each other as you went.
It was risky. But there was no truly safe way to do this, and it was the only version that felt honest.
Something more like what she and Clio had built together, in the gaps between the work.
She hoped it would be enough.