One pet peeve of mine in the UX research world: the overuse of the word “insights.” It seems like every time someone conducts a single study, they’re quick to label their results as “insights.” But most of the time, what they’re actually talking about are findings, or even signals.
# Signals vs Findings vs Insights: what’s the difference?

A signal is what you observe in your data: what a participant tells you in an interview, a behavior you see in a usability test. It's raw information.
Then, as Kerwin puts it in his [[Are you trying to leap directly from data to action? It’s not that simple.|article]], you apply your *knowledge of context*, and your signal becomes a finding (or "information" for Kerwin).
A finding is when you piece together the signals from a specific study. It’s context-dependent and tied to the particular research method you used and the sample you worked with. Findings are valuable, don’t get me wrong, but they’re not necessarily earth-shattering revelations. Because at this point, you've only sprinkled your findings with *knowledge of strategy*, most likely from your own perspective. If you're a consultant, maybe your knowledge of strategy is low because you only know what your client told you about their company's strategy. Even if you're working in-house, it's very likely that you don't know the ins and outs of the Sales department's strategy, or the External Communications team's.
Anyway, to turn findings into insights, you need to add this layer of strategy. Without it, your stakeholders will have to sort through everything you uncovered and (cherry-) pick the bits relevant to them. And that's the optimistic scenario: they might simply discard your results altogether, because it's noise to them.
An insight is what happens when multiple findings start to connect. It’s that “Aha!” moment when patterns emerge across different studies, methods, and contexts to tell a story that breaks silos. Insights are the result of synthesizing various pieces of information and seeing the bigger picture.
Then to make insights "actionable" you'll need to apply *knowledge of capabilities*. Again, see Kerwin's piece.
## From Signals to Insights: an illustration
Let me illustrate this progression with a second-hand marketplace.
From various research methods, we collected these raw observations (Signals):
- "I always check the seller's ratings before buying" (Interview)
- "The photos are too dark, I can't see the condition" (Usability test)
- "I'm not sure if this price is good or not" (Usability test)
- "I message 5-6 sellers before someone responds" (Focus group)
- Analytics showed 40% of users abandon after viewing seller profiles
- Search logs revealed frequent use of brand names + condition terms
- Customer service data showed frequent disputes about item condition
- "I cross-check prices on other platforms" (Interview)
- "I sold it elsewhere because no one responded for days" (Seller interview)
Adding our knowledge of the second-hand market, platform mechanics, and user behaviors, we crafted findings:
- Trust indicators (ratings, reviews) are crucial decision factors for buyers
- Current photo upload guidelines don't help sellers showcase items effectively
- Users lack market context to make confident pricing decisions
- The messaging system creates friction in the buyer-seller communication
- Serious buyers invest significant time validating sellers and items
- Brand and condition are primary search criteria
- Item condition is a major source of disputes
- Price uncertainty drives users to competitor platforms
- Delayed responses lead to lost sales opportunities
When we connected these findings with our platform's strategy (market growth, user retention, transaction success rate, competitive positioning), several key insights emerged:
1. **Trust Gap in Digital Second-Hand Commerce**: Unlike traditional second-hand shopping where people can physically inspect items, our platform hasn't successfully digitized the trust-building experience. This isn't just about seller ratings—it's about recreating the confidence that comes from in-person inspection. This connects to our strategy of increasing successful transactions and reducing returns.
2. **Market Education Vacuum**: Both buyers and sellers are operating in an information void about market values and expectations. This uncertainty creates platform-hopping behavior and hesitant users, directly impacting our goal of becoming the primary marketplace in our category.
3. **Communication Asymmetry**: The current asynchronous messaging system assumes both parties are equally engaged, but our sellers and buyers have different patterns of platform usage. This misalignment causes transaction failures that have nothing to do with actual buying intent.
These insights led to strategic recommendations:
- Develop an AI-powered photo quality checker with standardized condition reporting
- Create a dynamic pricing guide based on historical transaction data
- Implement a "response time guarantee" program for committed sellers
- Build a market intelligence dashboard showing real-time category trends
- Design a semi-automated messaging system with quick-response templates
- Create a verified seller program with additional trust benefits
We moved from simple observations (signals), to contextual understanding (findings), to strategic implications (insights). Each level required additional knowledge and synthesis, ultimately leading to recommendations that addressed both user needs and business objectives.
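If it helps to make that progression concrete, here's a minimal sketch in Python of how signals, findings, and insights nest into each other. The class names and the example data are hypothetical, not a prescribed tool; the point is that each level wraps the one below it plus an extra layer of knowledge:

```python
from dataclasses import dataclass, field

@dataclass
class Signal:
    observation: str  # a raw data point
    source: str       # the method it came from (interview, usability test, analytics...)

@dataclass
class Finding:
    statement: str    # signals + knowledge of context
    signals: list[Signal] = field(default_factory=list)

@dataclass
class Insight:
    story: str        # findings + knowledge of strategy
    findings: list[Finding] = field(default_factory=list)
    strategic_goal: str = ""  # the strategy layer that turns findings into an insight

# Two signals from different methods...
s1 = Signal("I always check the seller's ratings before buying", "interview")
s2 = Signal("40% of users abandon after viewing seller profiles", "analytics")

# ...combined with knowledge of context become a finding...
f1 = Finding("Trust indicators are crucial decision factors for buyers", [s1, s2])

# ...and findings connected across studies, framed by strategy, become an insight.
i1 = Insight(
    story="The platform hasn't digitized the trust-building of in-person inspection",
    findings=[f1],
    strategic_goal="increase successful transactions and reduce returns",
)
```

Notice the asymmetry the article describes: an insight typically holds fewer entries than the findings feeding it, because synthesis compresses.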
Note that you can have plenty of findings and end up with only a few insights. That's fine! It's not about the quantity of your findings; it's about the quality of your understanding.
# The “Aha!” Moment
Think of it this way: if findings are individual puzzle pieces, an insight is what you get when you start to see how those pieces fit together to form a larger image. It’s the difference between noticing that a user struggled with a specific button (a finding) and realizing that your entire navigation system is fundamentally flawed (an insight).
# Why It Matters
This distinction isn’t just semantic nitpicking (ok, maybe it is). When we blur the lines between findings and insights, we risk overvaluing individual data points and underappreciating the power of cumulative knowledge. It can lead to hasty decisions based on limited information, rather than encouraging a more holistic approach to understanding user behavior and needs.
So, the next time you’re tempted to call the results of your latest usability test “insights,” take a step back. Are you really looking at a game-changing revelation, or are you simply reporting what you observed in that specific context? There’s nothing wrong with findings – they’re the building blocks of our work. But let’s save the term “insights” for when findings come together into something truly valuable.
Remember, in UX research, as in many fields, the real magic often happens in the spaces between individual data points. It’s in connecting the dots that we find the most valuable insights to drive decisions.