Jennifer Keough: CEO and Co-Founder, Legal Notice Expert, JND Legal Administration
Gina Intrepido-Bowden: Vice President, Legal Notice Expert, JND Legal Administration
As originally published in the New York Law Journal 9/11/2025
Class notice operates at the intersection of due process and practical communication. While Rule 23 sets forth a standard of "reasonableness," most practitioners know the real challenge lies in execution: not just reaching people, but ensuring they understand the message and take appropriate action.
In recent years, other industries, particularly in consumer marketing, have moved well beyond assumptions when it comes to audience behavior. Major brands now use synthetic audience modeling: AI-generated personas that simulate how different segments of the population are likely to respond to a given message. These models are tested in advance, allowing campaign designers to fine-tune language, visuals, and delivery methods before anything goes live.
In legal notice, that kind of pre-launch testing has not yet been widely adopted. But it holds significant potential to modernize how we plan and validate notice programs, especially as courts continue to scrutinize adequacy not only in terms of reach, but in terms of actual comprehension and engagement.
Moving Past “Reasonably Calculated”
Most notice programs still rely on educated assumptions about how people receive and process information—assumptions about literacy, language, digital access, and trust in legal institutions. While those assumptions may be enough to satisfy technical adequacy, they don’t always reflect how people behave in the real world. That’s especially true in large or diverse classes, where engagement patterns vary widely.
What’s missing is feedback, prior to launch, about how a given notice strategy is likely to perform with real people. And in most campaigns, that feedback doesn’t arrive until it’s too late to make meaningful adjustments.
Simulating Behavior Before It Happens
Synthetic audience modeling addresses this gap directly. These tools draw on large datasets from prior legal notice campaigns to build behavioral simulations of prospective class members. The simulated personas reflect demographic, psychographic, and behavioral factors, and are used to predict how a given message or communications strategy will perform.
Rather than relying on legacy templates or theoretical assumptions, administrators can use these models to preview how different strategies are likely to perform across the class. It’s a way to ground strategic choices in evidence, rather than instinct.
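To make the mechanics concrete, the sketch below shows, in highly simplified form, how a persona-based simulation might compare two notice variants before launch. Every persona, attribute, and response rate here is a hypothetical illustration for exposition, not data from any real notice campaign or any particular vendor's model.

```python
# Illustrative sketch of synthetic audience testing: simulate how a mix of
# hypothetical personas responds to a formal vs. a plain-language notice.
# All personas and rates below are invented for illustration only.
import random

random.seed(7)  # fixed seed so the sketch is reproducible

# Each persona is a simplified behavioral profile: a baseline chance of
# engaging with a notice, a lift from plain-language phrasing, and its
# assumed share of the class.
personas = [
    {"name": "digital-first, high literacy", "base_rate": 0.20, "plain_lift": 0.02, "share": 0.30},
    {"name": "mail-preferring, moderate literacy", "base_rate": 0.10, "plain_lift": 0.05, "share": 0.45},
    {"name": "low trust in legal mail", "base_rate": 0.04, "plain_lift": 0.04, "share": 0.25},
]

def simulate(variant_plain: bool, n: int = 10_000) -> float:
    """Simulate one notice variant across the persona mix; return engagement rate."""
    engaged = 0
    for _ in range(n):
        # Draw a simulated class member in proportion to each persona's share.
        p = random.choices(personas, weights=[x["share"] for x in personas])[0]
        rate = p["base_rate"] + (p["plain_lift"] if variant_plain else 0.0)
        if random.random() < rate:
            engaged += 1
    return engaged / n

formal = simulate(variant_plain=False)
plain = simulate(variant_plain=True)
print(f"formal phrasing: {formal:.1%} engaged")
print(f"plain language:  {plain:.1%} engaged")
```

In a real program the personas would be derived from accumulated campaign data rather than hand-written, but the planning logic is the same: run the variants against the modeled class, compare projected engagement, and refine before anything is filed or sent.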
Front-Loading Rigor into the Planning Process
What makes this approach especially valuable in the legal context is timing. Synthetic audience testing happens before the campaign is deployed. That means the testing environment can serve as a risk filter—helping teams refine notice language, visual hierarchy, subject lines, and delivery mechanisms early in the process, rather than relying on post-launch analytics.
Adequacy vs. Engagement
We all know that achieving a minimum 70% reach doesn’t guarantee 70% comprehension—or even meaningful attention. A single exposure to a banner ad, email, or postcard rarely drives action, especially when legal language is dense or poorly tailored to the recipient’s context.
Synthetic testing can identify which variables matter most for engagement: whether plain-language summaries perform better than formal phrasing; whether digital channels are outperforming—or underperforming—among certain subsets of the class; whether visual design changes affect comprehension across literacy levels. These insights help build notice programs that don’t just comply, they connect.
Learning from the Past, Previewing the Future
Unlike machine learning tools that optimize only after launch (e.g., Google Display campaigns or other programmatic buying), synthetic audience modeling works in advance. That’s a key distinction. Machine learning requires real-time behavioral data to adapt, which makes it valuable for ongoing marketing—but less useful for one-time, court-approved campaigns that can’t afford trial-and-error.
Synthetic testing, by contrast, is based on accumulated data from past notice efforts. It allows legal teams to test proposed strategies against modeled outcomes before anything is filed or sent. The predictions can be benchmarked against previous results, giving teams a concrete sense of what’s likely to work and why.
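The benchmarking step described above can be sketched as a simple calibration check: compare what the model predicted against what past, completed campaigns actually produced. The campaign names and claim rates below are hypothetical placeholders, not results from any actual matter.

```python
# Illustrative sketch: benchmarking modeled predictions against observed
# outcomes from prior notice programs. All names and rates are hypothetical.

# (predicted claim rate, observed claim rate) for past, completed programs
benchmarks = {
    "consumer settlement A": (0.050, 0.046),
    "data-breach settlement B": (0.012, 0.015),
    "wage-and-hour settlement C": (0.210, 0.190),
}

def mean_absolute_error(pairs) -> float:
    """Average gap between predicted and observed rates: a calibration check."""
    errors = [abs(predicted - observed) for predicted, observed in pairs]
    return sum(errors) / len(errors)

mae = mean_absolute_error(benchmarks.values())
print(f"mean absolute error across benchmarks: {mae:.3f}")
```

A small average error against historical programs is what gives teams "a concrete sense of what’s likely to work and why" before committing a strategy to a filing.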
A More Empirical Standard
None of this conflicts with the standards of Rule 23; if anything, it advances the ideals of due process. Courts don’t just want evidence that a notice went out. Increasingly, they want to see that the notice had a fair chance of being understood and acted upon. By introducing behavioral modeling and pre-deployment testing into our toolkit, we move toward a more empirical, outcome-focused standard of notice adequacy.
Tools that simulate engagement across platforms, demographics, and content variants provide a practical and scalable way to make that shift. They’re not a substitute for legal judgment, but they offer a level of strategic foresight that traditional planning methods can’t match.
As the bar for meaningful notice continues to rise, synthetic audience testing may soon be less a competitive advantage—and more a professional expectation.