Conference Speakers
Sarah Deaton
The Git Commands I Avoided for 9 Years (and Why I Wish I Hadn't)
For the first nine years I worked in tech, I used Git the way many people do: just enough to feel competent, but not enough to recover if anything went sideways. I had my fair share of git rebase disasters: panic would set in, and I'd delete the repo and re-clone to start fresh.
When I discovered git worktree, I realized I'd been stubbornly avoiding a lot of valuable Git features. I finally researched commands I'd ignored or avoided because they seemed too complicated, and realized there were many that would have saved me trouble, heartache, and wasted time over the years.
This talk is a writer-friendly tour of the Git commands I avoided for years, and the calm workflows they unlock. We'll cover practical tools like git worktree (stop switching branches constantly), git reflog (your safety net), and git rerere (resolve the same conflict once instead of repeatedly).
I'll anchor these concepts in real docs-as-code scenarios. By the end, you'll have concrete techniques you can use today to reduce friction and anxiety in your documentation workflows, especially when juggling multiple changes at once.
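The three commands named above can be tried safely in a scratch repository. A minimal sketch (the repo path, branch name, and file below are placeholders, not examples from the talk):

```shell
# Create a throwaway repo to experiment in.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.email "demo@example.com"
git config user.name "Demo"
echo "intro" > docs.md
git add docs.md
git commit -qm "first draft"

# git worktree: check out a second branch in a sibling directory,
# so you can work on two changes without constantly switching branches.
git worktree add -q ../review-edits -b review-edits

# git reflog: a record of everywhere HEAD has been; commits "lost"
# to a bad rebase or reset can usually be recovered from here.
git reflog

# git rerere: once enabled, Git records how you resolve a conflict
# and replays that resolution if the same conflict appears again.
git config rerere.enabled true
```

Nothing here touches a real project, so it is a low-stakes way to build the muscle memory before you need it.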
Mary Elise Dedicke
Documentation Thinking Beyond the Docs: Lessons from Decision-Facing Technical Writing
When I transitioned from software documentation to writing commercialization strategies for federally funded research and development projects, I expected to learn "business skills." Instead, I discovered that the documentation skills I already had—audience analysis, information architecture, and iterative refinement—were exactly what worked in high-stakes environments where skeptical reviewers decide whether to fund a technical idea.
Over several years of writing proposals and commercialization plans for computer vision, scientific visualization, medical imaging, and data & analytics projects at Kitware, I've applied documentation approaches in contexts where they aren't typically recognized. I've had to translate complex research for review panels spanning physicists, program managers, and domain experts, often with limited time to evaluate long, technically dense proposals. I've iterated on technical narratives where structure and clarity mattered more than adding detail, and where each revision cycle directly affected whether reviewers understood the core innovation.
This talk shares concrete lessons from decision-facing technical communication that strengthen any technical narrative. I'll walk through real scenarios and show artifacts such as discriminators tables, concept diagrams, and structural patterns that help readers quickly grasp value when they are skeptical or time-constrained. Rather than focusing on grants themselves, the talk centers on transferable documentation principles that proved most effective beyond traditional manuals and reference pages.
Alex Garnett
Writing the Docs when the Protocol is the Product
I joined Bluesky as their first Devrel hire in October 2025. At that time—stop me if you’ve heard this one before—they had two different docs presences that did not necessarily complement each other in terms of either coverage or UX. In fairness, there was a reason for this.
Bluesky is both the name of my employer and the name of what nearly everyone thinks of as our primary product. For this reason, it’s been necessary for us to have dedicated Bluesky docs, so people wanting to create bots and third-party clients can find implementation patterns for these sorts of features that are common to most social networks.
Our real primary product, however, is the Atmosphere. The Atmosphere is the ecosystem of the AT Protocol, a new way of building and managing social applications that work more like the web: open, interoperable, and not controlled by a single corporation. Bluesky APIs are almost all narrowly-scoped versions of Atmosphere APIs, implementing only the Bluesky Lexicon out of all the Lexicons in our network.
In this talk, you’ll learn about the decisions we made when rewriting and relaunching the Atmosphere docs with the goal of being able to subsume the dedicated Bluesky docs into a single point of entry. You’ll see how we’ve designed interfaces to our Lexicons that make them look less like an intimidating API reference, and more like an app catalog. And you’ll get to post about it!
Ethan Palm
What the Data Shows When Docs Meet LLMs (and Why I'm Excited for the Future)
There has been a shift. Instead of people asking me, "What is technical writing?" during small talk, they now ask, "Aren't you worried about AI taking your job?"
Not really. And I work for a company that built a writing agent and creates self-updating documentation workflows. We also host thousands of documentation sites—all of which are navigating the shift to AI-assisted content creation and AI-mediated content interaction. (Also, I, a human, chose to use that em dash.)
Data can be hard to come by for documentation teams. I am in the rare position where I have lots of it. And I want to share it. I can see how teams actually use AI tools, where automation succeeds and fails, and how technical writer roles are evolving.
In this talk, I'll share:
- What the data reveals about LLM impact on documentation (and what it doesn't)
- How users engage with documentation sites and what behaviors are changing
- Practical approaches for supporting our users with high-quality documentation
- A brief rant about em dashes
- Why I'm excited about the future of our profession
Annie Zempel
Pillars That Hold: Structuring Documentation Around User Progress
Highly technical documentation can contain everything users need and still leave them stuck. I learned this the hard way when I inherited an existing documentation set for a comprehensive security and log management product.
The content was often packed with useful detail, but it assumed a level of context many end users did not yet have. On paper, almost everything was there. In practice, however, many users still struggled with the same questions: Where do I start? Am I in the right place? And what am I supposed to do next?
Instead of rewriting the documentation from scratch, I redesigned its structure. I built an information architecture that maps content to the user journey from “nothing installed” to “operational outcomes.” My team and I then defined a set of ten core knowledge pillars that span the full journey. Each pillar became a stable zone with a clear purpose, explicit boundaries, and a “definition of done,” so users could orient themselves and move forward with confidence.
This talk shares:
- What we tried and learned through iteration
- How knowledge pillars helped us design documentation that supports wayfinding and progressive disclosure without depending on a specific toolset
- How stakeholder interviews and real-world feedback shaped the pillar boundaries and validated the journey
You’ll leave with a practical approach you can adapt to structure documentation around stages of progress, especially when you’re working with inherited content and users who are still upskilling.
Rakesh Pasupuleti
What I Learned Building an AI Documentation Auditor (And What It Found in 500+ Pages of Docs)
Last year, I spent three months manually auditing a major developer documentation portal. I found deprecated code examples, broken API references, and outdated content—but the real problem was that the audit was obsolete within weeks. So I built an AI tool to do it continuously.
This talk shares what I learned about building the tool and about the patterns of documentation debt it exposed.
You'll hear about:
- The 5 quality dimensions that actually matter for developer docs (and why "freshness" kills you faster than you think)
- Specific patterns of documentation rot: deprecated methods that readers copy-paste into broken code, version mismatches between prose and examples, and API references to endpoints that no longer exist
- What AI can reliably detect vs. where it hallucinates (spoiler: it's great at syntax errors, terrible at "is this explanation clear?")
- A framework for prioritizing fixes when you have 47 issues and time to fix 5
What this talk is NOT: a product pitch, a claim that AI replaces writers, or a recommendation to use any specific tool.
What you'll take away: a checklist of common documentation debt patterns, practical criteria for evaluating AI-assisted tools, and a prioritization framework for when you have too many issues and not enough time.
Darlene Chani, Maya Postma
"Chat, Are We Cooked?" Help in the Moment: Designing Policy Documentation for High-Stakes, Real-Time Decisions
In the world of live streaming, when something unexpected happens, creators need answers in seconds. Picture this: A Twitch creator is live streaming to 500 viewers, when unexpected sexual content appears in the sponsored indie game they're playing. Chat erupts: "You're gonna get banned!" Is that true? No one knows. Moderators frantically search the Help Center on behalf of the creator, who must continue the stream. They find nothing conclusive. The creator does their best to manage the situation, completing the stream and deleting the recorded video (VOD) out of an abundance of caution. The creator spends days anxious over potential consequences.
As the education program manager for Twitch, I hear stories like this constantly. In December 2025, our team conducted a study with Partner streamers, who described our policy documentation as "mushy" and "wishy-washy"—not because we lacked information, but because it didn't help them make decisions under pressure in the moment as something urgent unfolded live on stream.
Traditional documentation assumes users have time to read and plan. Crisis documentation requires a completely different approach. This talk shares how Twitch is redesigning our Community Guidelines supporting documentation to function as real-time decision support, while remaining mindful of risks and other factors.
Adam Michael Wood
Drops of Jupyter: An Exploration of the Notebook Ecosystem
Many of us who write both prose and code fell in love with Jupyter notebooks quickly, but have also run into many of the limitations they pose for diverse types of technical communication, computational exploration, and software development. This talk looks at a number of alternative notebook and notebook-adjacent tools and how each serves the needs of different contexts and use cases.
While this is, on the surface, a talk about "tooling," it isn't promoting a specific tool or serving as a tutorial. Rather, it explores how a variety of tools have grown up to meet diverse needs, and how those different tools reflect and serve different paradigms, mental models, and processes.
Different notebook systems focus more on (or are better suited to) computational exploration, or data analysis, or narrative communication, or visual communication, or literate programming for software development. So yes, it's a talk about tools. But it's really a talk about different ways we use computation and prose together for different purposes.
Busayo Ojo
You Don't Need More New Contributors — You Need Onboarding That Actually Works
We're always talking about how hard it is to get contributors for open source documentation. But what if the problem isn't that people don't want to help? What if we're just making it way too hard for them to start?
I've spent the last few years working on documentation for projects like CHAOSS and running Hacktoberfest events, and I kept seeing the same thing happen: people would show up excited to contribute, and then... nothing. They'd get stuck, confused, or just disappear.
So I started paying attention to what was actually stopping them. And it wasn't what I expected.
This talk is about what I learned from watching people try (and fail) to make their first documentation contribution. I'll share the specific things that kept tripping people up, and what actually worked when we changed our approach.
If you've ever wondered why people aren't contributing to your docs even though they said they would, or if you want to make it easier for newcomers to actually get started, this talk might help.
Alina Desiatnikova
The Invisible Work: What It Takes to Be the First (and Only) Tech Writer on the Team
Being the first and only Technical Writer on a team doesn’t just mean documenting features—it means becoming a bridge, educator, advocate, and culture‑builder all at once.
Over time, I’ve learned that while documentation is often undervalued or taken for granted, it quietly shapes everything: from product clarity to better UI to an improved API experience. I’ve had the “Can’t AI just write this?” conversation—more than once—and I’ve learned to respond by showing that Tech Writers don’t just write. We structure and amplify the organization’s product knowledge.
Whether you’re just starting as the only Technical Writer or already leading documentation efforts solo, you’ll walk away with actionable insights—and the confidence to turn invisible work into visible, lasting impact.
In this talk, I’ll share:
- How to communicate the value of technical writing when your role is unclear or misunderstood
- Practical approaches to building effective documentation from scratch, without a dedicated docs team
- How to build trust and collaborate with engineers, product managers, and designers as a solo writer
- How to stay focused, resilient, and energized—without burning out—while carrying the documentation function alone
Christian Miles
The Most Human Documentation
In 2011, Brian Christian’s The Most Human Human explored what it means to be human through the lens of the Loebner Prize—a competition between the best chatbots at the time via a tournament of Turing Tests.
Looking back from 2026, the book remains insightful, yet there's something quaint about both the competition and the Turing Test itself.
Last year, I released an open source Vale.sh plugin that highlights features of writing that can signal AI-generated text. Drawing on crowdsourced data from Wikipedia editors, it's an example of human-curated workflow augmentation that guides and teaches rather than generates or replaces.
Applying this to production use cases illuminates the current landscape of assistive writing tools that amplify human craft. Attendees will leave with an understanding of:
- How bespoke tools can help navigate a socially complex environment created by the profligate proliferation of AI technologies
- The reality of systematizing rules in a world of squishy language
- How friction should be embraced when striving for the clearest possible communication of technical ideas
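To make the approach concrete: Vale rules are plain YAML files, so a style in the spirit of the plugin described above might look like the sketch below. This is a hypothetical rule for illustration; the message and token list are invented here, not taken from the plugin's actual data.

```yaml
# Hypothetical Vale "existence" rule flagging phrases that
# crowdsourced editors associate with AI-generated text.
# Token list is illustrative only.
extends: existence
message: "'%s' is a phrase often flagged as a sign of AI-generated text."
level: suggestion
ignorecase: true
tokens:
  - delve into
  - rich tapestry
  - it is important to note that
```

Because the rule only suggests rather than rewrites, the human author stays in control of every decision, which is the point of the talk.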
Aileen Mary
How We Deprecated 500+ Articles: Cleaning Up Documentation at Scale
Imagine a small SaaS product with a well-structured help center. The documentation is lean, accurate, and easy to maintain. Content ownership shifts organically from product managers to product marketers to the support team, and for a while, it works.
Then, the wake-up call from the support team: “We could improve bot deflection by 30% just by fixing the content.” Our solution articles were feeding our support bot, and the bot was only as good as the documentation it learned from.
As our company evolved from a single SMB product into an enterprise platform, documentation grew explosively. With each iteration, we added new articles without retiring old ones. Additionally, we accumulated FAQ articles, most of them duplicating content that already existed in long-form articles.
The challenge wasn't that we lacked documentation. It was that we had a documentation abundance problem.
In this talk, I’ll share:
- How we deprecated over 500 articles, nearly 30% of our Knowledge Base, through a systematic, data-driven process aligned with support metrics and product priorities
- How we audited content using time-based and traffic-based signals
- How we aligned product and support teams on deprecation decisions
- Our framework for updating, retiring, or merging content
- The measurable impact on bot accuracy and why quality beat quantity
Sarah Lin
How to Win Docs and Influence Robots: Improving Robot Accuracy Through Better Docs
Crafting quality documentation requires enormous time and energy. Yet, product users often find it faster and easier to query robots than to navigate a documentation site. (In this sense, ‘robots’ refers to web crawlers, large language models, answer engines, and overarching AI technologies that enable our users to consume our content.) This leads us to two related questions:
- How do we know if the robots give people accurate information?
- If the robot is incorrect, how can we improve its answers?
This session will walk participants through how to influence the accuracy of AI tool answers through product documentation. MongoDB's Information Architect will share the combination of writing guidelines and structural changes that have kept MongoDB’s internal accuracy metrics at 93% across ~8 answer engines.
We'll discuss:
- Principles of writing for robots in an AI world
- How we tracked performance metrics for ChatGPT, Gemini, Meta, and other LLMs
- Lessons learned about testing the same prompts across multiple answer engines over time
- How to prioritize improvements to the docs that are manageable for the size of your team
This talk will help you understand the difference between content designed solely for human consumption and content suited for both human and AI consumption.