Darwin’s CEO and Co-Founder, Noam Maital, explores how the evolution of writing tools, from early keyboard systems like China’s Wubi method to modern AI-powered prediction, reveals the quiet but powerful ways technology shapes how we communicate. Noam argues that as AI becomes more involved in finishing sentences, drafting language, and guiding ideas, the real priority is preserving human intent and authorship. That requires intentional design: design that gives users control, ensures transparency, and protects diverse voices, treating language technology as the civic infrastructure it has become. His message is clear: AI should accelerate our work, not define it, and trust is built when people remain firmly in the author’s seat.

Wang Yongmin was a young engineer in 1970s China, sitting at a desk buried under thousands of index cards. Day after day, he pulled Chinese characters apart, cataloging their pieces, searching for a way to fit an entire civilization of symbols onto a keyboard built for 26 letters. Five years of this. Five years in a locked room at a secret research institute because China feared something most of us never think about: that computers might erase its written language. And Wang simply refused to let that happen.
What makes his effort remarkable is not only the scale of the challenge but how different it looks through today’s lens. The intense manual labor that cost him half a decade could probably be handled by AI now in a matter of days. A vision model could sort the character components, a language model could design the structure, and a recommender could personalize it. Tasks that once required a human to sacrifice years of focus would now be automated. That contrast is not just technological progress. It is a reminder that the way we work with tools shapes the way we think and write.
After Wang released his Wubi method, which let people spell characters by their visual components, Chinese typing kept evolving. Pinyin became standardized. Typing shifted from shapes to sounds. Then came cloud-powered predictive typing, where the keyboard does not wait passively but guesses what you intend to say. Begin a name and it finishes it based on patterns across millions of users. Start a phrase and the next word appears before you fully reach for it. It is fast and genuinely helpful, but it raises a question that now reaches far beyond China. Who is steering the sentence?
Wubi preserved the diversity of Chinese dialects. Pinyin streamlined pronunciation. Cloud prediction optimized everything for speed. Each stage moved authorship slightly farther from the individual and slightly closer to the system. None of this was intentional. Convenience simply has gravity. Language drifts toward whatever makes life easier. And AI accelerates that drift.
We have already seen how design choices ripple through culture. The QWERTY keyboard subtly pushes people toward words with more right-hand letters, which is wild but true. After typing became ubiquitous, baby names with more right-hand letters became more common. And the top row spells typewriter because nineteenth-century salesmen needed to look competent even when they were not. A tiny sales trick ended up shaping communication for generations. AI is going to magnify those quiet nudges in ways we cannot ignore.
That is why this moment matters. Predictive models are already finishing our sentences. Soon, they will help rewrite full paragraphs. Later, they may offer entire ideas that feel polished enough to accept. And because they are helpful, we will. The risk is not some dystopian takeover. It is a slow drift toward writing what the machine expects rather than what we actually mean.
But Wang’s story points in a different direction. He did not protect the Chinese writing system by resisting technology. He protected it by shaping technology with intention. He bent the tool toward the culture instead of letting the tool bend the culture. That is the attitude we need now. Not fear. Not nostalgia. Intentional design.
In practice, that means giving users real control over how much AI assists them, from no help at all to full drafting. It means transparency about why suggestions appear. It means tools that strengthen someone’s unique voice instead of flattening it. Far from slowing adoption, this kind of clarity builds trust, which is exactly what unlocks AI’s real potential in the public sector and beyond.
Language tools are now part of our civic infrastructure. They influence how communities communicate, how governments operate, and how decisions get made. The goal is not to restrain AI. The goal is to lay strong rails that support long-term, confident use. Auditing suggestion patterns, preserving diverse voices, and allowing people to carry their personal writing model with them are not constraints. They are foundations.
Speed is not the prize. Clarity and agency are. AI can replicate Wang’s five-year discovery process in an afternoon, which is precisely why we should embrace it. But authorship, the final say in what we put into the world, still belongs to us.