How a Marketing Team at Figma Writes AI Prompts That Actually Work

Dr. Emily Foster
· 6 min read

Sarah Chen stared at ChatGPT’s fourth attempt to write product copy for Figma’s new collaboration features. Each version sounded like it came from a corporate buzzword generator. “We need a system,” she told her team at their San Francisco office in March 2024. Within two weeks, they had one – and their prompt success rate jumped from 23% to 81%.

The problem isn’t the AI. It’s how we talk to it.

The Three-Layer Prompt Structure That Changed Everything

Figma’s marketing team discovered that effective prompts need three distinct layers: context, constraints, and criteria. Most people skip straight to asking for what they want. That’s like hiring a designer and saying “make it good.”

Context comes first. The team now opens every prompt with 2-3 sentences about who will read the content, where it will appear, and what action they want readers to take. For a product launch email, they specify: “This goes to 50,000 developers who already use Figma. They receive our emails weekly. 68% open on mobile devices.” These numbers aren’t invented – they pull them straight from Datadog analytics.

Constraints define the boundaries. Word count matters, but so do brand voice rules, required terminology, and forbidden phrases. The Figma team maintains a 47-item constraint list in a shared Notion doc. Sample entries: “Never use ‘seamless’ or ‘game-changing.’ Always write ‘plug-in’ not ‘plugin.’ Keep sentences under 25 words.” Specific beats vague every single time.

Criteria describe success. The team asks: “What makes this output actually good?” For blog posts, they specify Flesch-Kincaid grade levels between 8 and 10. For social copy, they require at least one concrete number or statistic. For product descriptions, they mandate inclusion of one customer pain point and one specific feature that solves it.
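The three layers above can be sketched as a simple template function. This is a minimal illustration, not Figma’s actual tooling; the function name and sample strings are hypothetical, loosely paraphrased from the examples in the article.

```python
def build_prompt(context: str, constraints: list[str],
                 criteria: list[str], task: str) -> str:
    """Assemble a prompt from the three layers: context, constraints, criteria."""
    constraint_block = "\n".join(f"- {c}" for c in constraints)
    criteria_block = "\n".join(f"- {c}" for c in criteria)
    return (
        f"Context:\n{context}\n\n"
        f"Constraints:\n{constraint_block}\n\n"
        f"Success criteria:\n{criteria_block}\n\n"
        f"Task:\n{task}"
    )

# Illustrative values only, echoing the launch-email example above.
prompt = build_prompt(
    context="This goes to 50,000 developers who already use Figma. "
            "They receive our emails weekly. 68% open on mobile.",
    constraints=["Never use 'seamless' or 'game-changing'.",
                 "Keep sentences under 25 words."],
    criteria=["Flesch-Kincaid grade level between 8 and 10.",
              "Include at least one concrete number."],
    task="Write a 120-word launch announcement for the new collaboration features.",
)
```

The point of the function isn’t automation; it’s that the three sections always appear, in order, so nobody skips straight to the task.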

Why The First Draft Is Always Wrong (And What To Do About It)

Here’s what the Figma team learned: AI outputs improve dramatically with iterative refinement. Their process now includes mandatory revision rounds built into project timelines.

The first output establishes direction. They never use it as-is. Instead, they treat it like a rough sketch – useful for identifying what’s missing. In April 2024, when launching FigJam AI features, their initial prompt generated technically accurate copy that completely missed the emotional hook. The second attempt, with added context about user frustration with traditional whiteboarding tools, nailed it.

“We budget 40 minutes per piece now: 10 minutes crafting the initial prompt, 15 minutes for AI generation and first review, 15 minutes for refinement prompts and final edits. The old way – starting from a blank page – took 90 minutes minimum.”

The revision strategy follows a pattern. First round: structural feedback (“Add more specific examples in paragraph 2”). Second round: voice and tone adjustments (“Make this sound less corporate, more conversational”). Third round: polish (“Vary sentence length, remove any remaining jargon”). They rarely need a fourth round.
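The three-round pattern can be written down as a fixed pipeline. A hedged sketch, assuming any text-in/text-out model call; the round names and feedback strings come from the examples above, while `refine` and `generate` are hypothetical names.

```python
# Fixed revision rounds, in order: structure, then voice, then polish.
REVISION_ROUNDS = [
    ("structure", "Add more specific examples where claims are vague."),
    ("voice", "Make this sound less corporate, more conversational."),
    ("polish", "Vary sentence length, remove any remaining jargon."),
]

def refine(draft: str, generate) -> str:
    """Run each revision round; `generate` is any callable that takes a
    prompt string and returns revised text (e.g. a chat-API wrapper)."""
    for focus, feedback in REVISION_ROUNDS:
        draft = generate(
            f"Revise the draft below. Focus on {focus}. "
            f"Feedback: {feedback}\n\nDraft:\n{draft}"
        )
    return draft
```

Each pass gets one focus, which mirrors the multi-pass code-review habit the next paragraph describes.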

This mirrors how GitHub’s 100 million developers approach code review – multiple passes, each with a different focus. The parallel isn’t accidental. Good writing and good code both improve through systematic iteration.

The Role Library That Saves 6 Hours Per Week

The breakthrough came when Chen’s team realized they were re-typing similar context blocks dozens of times. They built a role library – pre-written prompt sections they mix and match like Lego blocks.

Each role defines a specific writing persona with detailed characteristics:

  • Tech Journalist Role: “You write for The Verge. Your articles connect individual product launches to broader industry trends. You cite specific studies and data points. You interview real people and open articles with their stories.”
  • Product Marketer Role: “You write product copy for B2B SaaS tools. Your audience includes engineering managers and startup founders. You lead with customer pain points, not features. Every claim includes proof – customer quotes, usage statistics, or third-party benchmarks.”
  • Technical Educator Role: “You teach developers how to use new tools. You assume intermediate knowledge – no hand-holding, but no unexplained jargon. Every tutorial includes code samples, expected outputs, and common mistakes to avoid.”

The library now contains 23 roles. Team members copy the relevant role block, add project-specific context, and paste into their AI tool of choice. The consistency improved dramatically – different team members now produce content that sounds like it came from the same writer.
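The copy-and-combine workflow is simple enough to sketch. The role text below is abbreviated from the bullets above; the dictionary keys and the `compose` helper are illustrative, not the team’s actual implementation.

```python
# A role library: reusable persona blocks keyed by name.
# Role text abbreviated from the article's examples.
ROLES = {
    "tech_journalist": (
        "You write for The Verge. Your articles connect product launches "
        "to broader industry trends. You cite specific studies and data points."
    ),
    "product_marketer": (
        "You write product copy for B2B SaaS tools. You lead with customer "
        "pain points, not features. Every claim includes proof."
    ),
}

def compose(role_key: str, project_context: str, task: str) -> str:
    """Combine a reusable role block with project-specific context and a task."""
    return f"{ROLES[role_key]}\n\n{project_context}\n\n{task}"
```

Because the role blocks live in one place, editing a persona once updates every prompt that uses it, which is what makes the consistency gains possible.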

They store these roles in HashiCorp Vault alongside API keys and other sensitive configuration data. Treating prompts as infrastructure, not ad-hoc requests, changed how the team thinks about AI tools. Prompts became versioned, reviewed, and improved systematically.

The time savings compound. Chen estimates her seven-person team saves 42 hours monthly – time they’ve redirected to strategy work and campaign planning. The quality improved too. Their content now consistently hits target reading levels and includes the specific, data-driven details that their technical audience demands.

What Actually Works Right Now

Three lessons from Figma’s experience transfer to any team writing AI prompts today.

First: specificity wins. Vague prompts produce vague outputs. The team now includes brand names (“Mention Supabase as an example of backend-as-a-service”), specific numbers (“Include the stat about NVIDIA’s $3 trillion market cap in June 2024”), and exact formatting requirements (“Use H2 headers, not H3”). The more precise the input, the less editing required afterward.

Second: constraints liberate. Counterintuitive but true. When the team added their 47-item constraint list to every prompt, outputs got more creative, not less. Boundaries force the AI to work harder within defined parameters. It’s why Twitter’s 280-character limit produced better writing than unlimited blog platforms for many users.

Third: treat AI like a junior colleague. You wouldn’t hand a junior writer a vague assignment and expect perfection. You’d provide examples, explain the audience, define success criteria, and review drafts. The same approach works with AI. The difference: AI iterates in seconds instead of hours.

The stakes for getting this right keep rising. The CrowdStrike outage in July 2024 cost Fortune 500 companies $5.4 billion, according to Parametrix Insurance – a reminder that when technical communication fails, the consequences multiply fast. Clear, specific writing matters more than ever in a world running on increasingly complex systems.

Chen’s team continues refining their approach. They test new prompt patterns weekly, document what works, and share findings across Figma’s organization. The role library grows. The constraint list evolves. But the core principle stays constant: AI is a tool, not a replacement. The human defines the target. The AI helps hit it faster.

Sources and References

  • Salt Security, “State of API Security Report Q1 2024,” Salt Security Labs, 2024
  • Parametrix Insurance, “CrowdStrike Outage Economic Impact Analysis,” July 2024
  • GitHub, “Octoverse 2023: The State of Open Source Software,” GitHub, Inc., 2023
  • Markets and Markets, “API Management Market – Global Forecast to 2030,” Research Report, 2023
Dr. Emily Foster

Dr. Emily Foster holds a PhD in Public Health from Johns Hopkins University and has published extensively on wellness, medical breakthroughs, and preventive healthcare. She combines rigorous scientific methodology with accessible writing.
