
The Logic of Design Revolutions

In 1962, Thomas Kuhn published The Structure of Scientific Revolutions and quietly dismantled one of science's most flattering self-portraits. Science, Kuhn argued, does not advance by the steady accumulation of truth. It advances by crisis. A dominant paradigm — a shared set of assumptions, methods, and values — guides what he called "normal science" until anomalies accumulate that it cannot explain. When the pressure becomes unbearable, the paradigm doesn't just get revised. It collapses, and a new one replaces it. Not because the new one is necessarily more true, but because it better handles what the old one couldn't.

Replace "science" with "design" and the argument holds surprisingly well. Design doesn't progress in any linear sense. It shifts. Paradigms rise, dominate, accumulate problems, and eventually break — not because the next thing is objectively better, but because it better reflects the values, aesthetics, and assumptions of its moment. And like Kuhn's scientific revolutions, design revolutions tell us less about the nature of things and more about the nature of us.


What Conventions Are, and Why They're All We Have

Before examining specific paradigm shifts, it's worth being clear about what conventions are doing in design.

Design is communication. And all communication depends on shared conventions — agreed signals that carry agreed meanings. The concept of affordance, first described by J.J. Gibson and later applied to design by Don Norman, is built on exactly this: whether a user perceives that some action is possible depends on their prior experience with conventions that conform to a similar pattern. The knowledge of what to do with a button comes not from the button itself, but from everything you've learned about buttons before.

As Lea Verou writes: "Communication is all about mutually understood conventions. In the language of user interfaces, affordances and signifiers are the vocabulary, and the same principles apply. Learnability is not an intrinsic property of a UI — it is a function of the context, cultural and otherwise, in which it is used."

Some conventions borrow metaphor from the physical world. Tabs are a metaphor for the tabs in a filing cabinet. As Léonie Watson explains, radio buttons take their name from the preset station selectors on car radios — designed so that only one could be pressed at a time. Others are entirely arbitrary, acquiring meaning only through repeated use: the underlined link, the magnifying glass for search, the hamburger icon. None of these were inevitable. They are learned. And once they're learned widely enough, they become effectively invisible — which is the highest compliment a convention can receive.

This means there is no external rulebook for design. There are no physics of interaction that hand us correct answers. Conventions can be effective or ineffective, inclusive or exclusionary — but they cannot be evaluated in isolation, only relative to external values like clarity, honesty, and inclusion.

That observation is both liberating and consequential, and it shapes everything that follows.


The Paradigms

Skeuomorphism: The World You Already Know

[Archive.org example to be verified — Apple.com circa 2012, iOS Notes app]

The dominant interface paradigm of the 1990s and 2000s was skeuomorphism — the design practice of making digital objects resemble their physical counterparts. Calendars looked like paper calendars. Notepads had lined pages and stitched leather bindings. Buttons were raised and shadowed to look pressable. The logic was straightforward: if users were unfamiliar with the new medium, borrow the conventions of the old one.

This was, in Kuhnian terms, a coherent paradigm with a clear internal logic. It extended existing human knowledge into digital space. Raised buttons looked pressable because they mimicked the physical feedback of pressing something real. The paradigm worked, and its anomalies were, for a long time, manageable.

From an accessibility standpoint, skeuomorphism had genuine merits — particularly for new users. Visual cues were explicit: buttons looked pressable, sliders looked slideable, and the dimensional styling carried meaningful orientation for users with cognitive disabilities or those unfamiliar with computing. The costs were visual complexity and the cultural specificity of some metaphors: a floppy disk as a save icon means nothing to someone who has never encountered one.

Flat Design: The Revolution Arrives

[Archive.org example to be verified — Apple.com circa 2013, post-iOS 7 launch]

By the early 2010s, the anomalies had accumulated. Skeuomorphism began to feel cluttered, dated, and visually dishonest — pretending the digital world was something it wasn't. The paradigm shift arrived most visibly with iOS 7 in 2013 and Microsoft's Metro design language. Out went the leather and the shadows. In came clean geometry, bold typography, and minimal ornamentation.

This was a genuine design revolution in the Kuhnian sense: not a refinement, but a wholesale replacement of assumptions. The new paradigm insisted that digital objects should look digital. Honesty over metaphor. Reduction over decoration.

But flat design's anomalies emerged almost immediately, and they were serious. As Verou notes, research consistently shows that flat buttons are less effective than their raised counterparts, causing more uncertainty and requiring more attention to identify as interactive. When everything is flat, nothing signals affordance by default. The visual vocabulary of interactivity — the cues that tell you this is a button, this is a text field, this is a link — was stripped away in favour of aesthetic coherence.

The accessibility implications were severe. Contrast ratios suffered as designers reached for light grays on white. Interactive elements became visually indistinguishable from static ones. For users with low vision, cognitive differences, or motor impairments who rely on clear visual targets, flat design was frequently not a neutral stylistic choice — it was a barrier. WCAG success criteria for non-text contrast (1.4.11) and focus appearance (2.4.13) exist in large part as responses to the damage flat design inflicted.
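The repair is not complicated, which makes the damage more striking. A sketch of how a flat button can keep its minimal aesthetic while still signalling interactivity — class names and colour values here are illustrative, not taken from any cited design system:

```css
/* A flat button that still meets WCAG 1.4.11 (non-text contrast)
   and keeps keyboard focus visible. Values are illustrative. */
.btn-flat {
  background: #ffffff;
  color: #1a1a1a;
  /* #767676 on white is roughly 4.5:1 — comfortably above the 3:1
     minimum that 1.4.11 requires for component boundaries. A light
     gray like #aaaaaa (about 2.3:1) would fail. */
  border: 1px solid #767676;
  border-radius: 4px;
  padding: 0.5em 1em;
}

.btn-flat:focus-visible {
  /* An explicit outline so keyboard users can always locate focus
     (WCAG 2.4.7 Focus Visible). */
  outline: 2px solid #1a5dab;
  outline-offset: 2px;
}
```

A border and a focus ring cost the aesthetic almost nothing; their absence costs some users the ability to find the control at all.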

Material Design: The Attempted Reconciliation

[Archive.org example to be verified — material.io circa 2014-2016]

Google's Material Design (2014) was an attempt to resolve the flat design crisis without abandoning its premises. The paradigm introduced elevation — a metaphorical z-axis that gave interface objects depth and shadow, not to imitate the physical world but to communicate hierarchy. Cards cast shadows to indicate they were interactive. Floating action buttons floated. The elevation said: something here can be acted upon.

This was a paradigm trying to solve its predecessor's anomalies while remaining within the revolution's spirit. Partly it succeeded. But the accessibility problems of minimalism persisted, and Material Design introduced new tensions: elaborate motion design that could cause vestibular issues for users with balance and motion-sensitivity disorders (a problem WCAG later addressed through 2.3.3 Animation from Interactions); colour-as-the-only-differentiator patterns that failed users with colour vision deficiency; dense information layouts that reduced touch target sizes below usable thresholds.
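The motion problem, at least, now has a standard mitigation: operating systems expose a user-level preference that CSS can query. A minimal sketch, with illustrative selectors:

```css
/* Elaborate elevation/motion transitions, as a Material-style
   surface might use them. Selector and timings are illustrative. */
.card {
  transition: transform 300ms ease, box-shadow 300ms ease;
}

/* Honouring the user's OS-level "reduce motion" setting is the
   standard response to animation-triggered vestibular issues
   (cf. WCAG 2.3.3 Animation from Interactions). */
@media (prefers-reduced-motion: reduce) {
  .card {
    transition: none;
  }
}
```

That this escape hatch had to be invented at the platform level is itself evidence of how thoroughly the paradigm assumed motion was harmless.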

Neumorphism: The Aesthetic That Almost Was

[Archive.org example to be verified — Dribbble showcase circa 2020]

Around 2020, neumorphism briefly captured the imagination of the design community. The aesthetic combined flat design's clean surfaces with skeuomorphism's dimensional cues — soft elements that appeared to extrude from or press into their background, rendered through subtle dual shadows. It was visually distinctive, even beautiful in controlled conditions.

It was also, from an accessibility standpoint, almost entirely indefensible.

Neumorphic interfaces depend on very low contrast between the element and its background. The shadow effects that define the style require the foreground and background to be nearly the same colour — structurally incompatible with WCAG contrast requirements, not marginally non-compliant but fundamentally so. For users with low vision, or those using screens in high ambient light, neumorphic interfaces often become invisible. The paradigm revealed something important: a design revolution can be driven entirely by aesthetic logic, with almost no consideration of who can actually use the result.
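The structural nature of the problem is visible in the recipe itself. A sketch of the canonical neumorphic button, roughly as it circulated around 2020 (values illustrative):

```css
/* The element's background is the page background: the component
   is defined entirely by its shadows. */
body { background: #e0e0e0; }

.neu-button {
  background: #e0e0e0;           /* identical to the page colour */
  border: none;
  border-radius: 12px;
  /* One dark shadow, one light: the soft "extruded" effect. */
  box-shadow: 6px 6px 12px #bebebe,
              -6px -6px 12px #ffffff;
}
/* Because foreground and background are the same colour, there is
   no boundary contrast to measure against WCAG 1.4.11 at all. */
```

Raise the shadow contrast enough to satisfy the guideline and the softness that defines the style disappears. The aesthetic and the requirement are not in tension; they are mutually exclusive.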

Neumorphism faded quickly, which suggests the profession has some capacity for self-correction. But its brief prominence is a case study in how paradigm shifts can be active regressions when measured against external criteria like inclusion.


Flow Conventions: The Layer Nobody Mentions

Design paradigms are usually discussed in terms of visual style. But the web introduced a second, equally powerful layer of convention: the templated interaction flow.

Over two decades, users have internalised not just what interfaces look like but how they unfold. These flow conventions operate across sites and products as shared cognitive scripts — and breaking them carries a cost that is poorly understood because it is invisible until something goes wrong.

Consider the most familiar examples:

Login and signup flows. Users expect: email field, password field, submit button, forgot password link nearby. A signup form that collects ten fields before revealing a password step, or that places the "already have an account?" link somewhere unexpected, breaks a script so deeply internalised that users often interpret it as a bug rather than a design choice.

E-commerce product discoverability. The mental model: browse a category grid, filter by attributes on the left (or top on mobile), click a product to see a detail page, find price and primary CTA above the fold. Radical departures from this pattern — product detail pages that require scrolling past editorial content to find a price, for instance — consistently underperform regardless of aesthetic quality.

Checkout flows. The expected sequence: cart review, delivery details, payment, confirmation. This pattern is so established that users experience anything that reorders or collapses these steps as disorienting, even when the reordering is technically more efficient.

Onboarding flows. The convention: brief welcome, progressive disclosure of features, a single early "quick win" to demonstrate value. Onboarding that front-loads complexity or skips the quick win loses users who haven't yet decided to commit.

What these flows share is that they are conventions in precisely Kuhn's sense — not laws, not optimal solutions derived from first principles, but shared agreements that function because they are shared. They represent accumulated UX decisions that have been tested, iterated, and gradually stabilised across thousands of products. Their value is not intrinsic but relational: they work because users have learned them.

This has an important accessibility dimension. Flow conventions benefit users with cognitive disabilities and those who rely on screen readers, because predictable structure reduces working memory load. When a checkout behaves like every other checkout, the user can direct their attention to the task rather than the interface. Disrupting flow conventions in the name of brand differentiation is not just a usability risk — it is an accessibility risk.

Which brings us to a specific case worth examining in detail.


A Conversation at the Whiteboard

The scene: a product design review for a redesigned checkout flow. Maya is the lead designer. Rohan is the accessibility specialist. There is a whiteboard. There is also one of those artisanal biscuits that appear at design reviews without anyone knowing who brought them.

Maya: So the right column — delivery summary, order total, and the coupon field — all stays exactly where users expect to find it. It's a two-column layout, very conventional, trust signals on the right, action on the left. I've seen this pattern on every major e-commerce site.

Rohan: I don't disagree with the layout. I disagree with the DOM order. Visually the coupon field is in the right column, which is fine. But in the markup it comes after the Place Order button. So a screen reader user tabs through the entire left column, reaches the CTA, and if they press Enter there — coupon unapplied — that's it. They never knew the field existed.

Maya: But the coupon field is literally on the same screen. It's not hidden.

Rohan: It's hidden to anyone navigating linearly. Which is a significant portion of keyboard users, and essentially all screen reader users. The visual convention you're describing — right column, coupon field — is real and it's right. But a sighted mouse user can perceive both columns simultaneously. A screen reader user experiences the page as a sequence.

Maya: So your solution is to move the coupon field to the left column? That breaks the entire visual convention. Users will look for it on the right and not find it.

Rohan: No. My solution is to move it in the DOM — not visually. Keep the two-column layout exactly as it is. But reorder the source so the coupon input appears in the tab sequence before the Place Order button, even though it renders on the right.

Maya: (pause) CSS order versus DOM order.

Rohan: Exactly. The visual presentation is yours to keep. The tab order is mine to fix. They don't have to be the same thing.

Maya: (picks up the biscuit) Why didn't you just say that at the start?

Rohan: I did. You were drawing arrows on the whiteboard.

The biscuit is shared. The layout ships with DOM order corrected. Three weeks later, coupon redemption among keyboard users increases measurably. No visual design was harmed in the making of this product.


The dialogue above is fictional, but the design problem is real and recurring. It illustrates something important: when conventions and accessibility appear to conflict, the conflict is often not between the convention and the accessible solution — it is between a superficial reading of the convention (visual position implies tab sequence) and a more precise one (visual position and DOM order are independent concerns). The accessible solution frequently turns out to be the more technically honest one.
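A minimal sketch of the fix Rohan describes, assuming a CSS grid layout — all markup, class names, and field structure here are illustrative:

```html
<!-- Source order follows the task: coupon before the submit button,
     so linear and keyboard users reach it first. -->
<form class="checkout" method="post" action="/order">
  <section class="details">
    <!-- delivery and payment fields -->
  </section>

  <div class="coupon">
    <label for="coupon-code">Coupon code</label>
    <input id="coupon-code" name="coupon" type="text">
  </div>

  <button type="submit">Place order</button>

  <aside class="summary">
    <!-- order total, trust signals -->
  </aside>
</form>

<style>
  /* Visual placement is handled independently of source order:
     the coupon renders in the right column, where sighted users
     expect it, while staying before the button in the DOM. */
  .checkout {
    display: grid;
    grid-template-columns: 2fr 1fr;
    grid-template-areas:
      "details summary"
      "details coupon"
      "submit  .";
  }
  .details { grid-area: details; }
  .coupon  { grid-area: coupon; }
  button   { grid-area: submit; }
  .summary { grid-area: summary; }
</style>
```

One caveat worth stating: decoupling visual order from DOM order should be done sparingly, because WCAG 2.4.3 expects focus order to preserve meaning. Here the DOM order is the meaningful one — the coupon belongs in the task sequence before the submission — which is precisely why the technique is defensible.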


When Branding Became Its Own Paradigm

Web design has a peculiar pathology that other design disciplines lack: its technical plasticity created the conditions for idiosyncratic design to be mistaken for identity.

Print designers operate within physical constraints. Industrial designers work against material limits. But the web, by the late 1990s, offered apparently infinite creative freedom — custom typefaces, arbitrary layouts, unusual navigation, bespoke interaction patterns. This coincided with the commercial turn of the web and the arrival of marketing logic into design practice. The conflation was almost inevitable: if branding requires differentiation, and design can express anything, then breaking conventions became evidence of a strong design language.

The result was a wave of products and websites that treated departures from convention as features. Scrolljacking. Hidden navigation. Centered logos, which usability testing has shown make users roughly six times as likely to fail to navigate back to the homepage, compared with left-aligned ones. Illegible typographic choices justified as "editorial." These weren't symptoms of malice — they were symptoms of a confused theory of communication: the belief that being distinctive meant being different, at the structural level, from established patterns.

Jakob's Law clarifies why this fails: users spend most of their time on other sites, and they bring those expectations to yours. The more honest framing: branding can and should live in visual identity, tone, colour, typography, and illustration — the surface of the design. Navigation, interaction patterns, form conventions, focus management — this is where shared language lives, and dismantling it for differentiation penalises the users you're trying to reach.

This matters especially for accessibility. As Léonie Watson's analysis of perceived affordances and the functionality mismatch makes concrete: using one element and styling it to look like something else creates a mismatch between the actions people expect and the ones they can actually take. A keyboard user encounters a component that looks like buttons, attempts to tab through it as they would buttons, and finds it broken — because it isn't buttons. The visual convention says one thing. The functional reality says another.
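The mismatch Watson describes can be made concrete in two lines of markup — a hypothetical example, with an assumed `save()` handler:

```html
<!-- Looks like a button, but is a styled div: not focusable, not
     operable with Enter or Space, not announced as a button by
     assistive technology. -->
<div class="button" onclick="save()">Save</div>

<!-- A real button: focusable, keyboard-operable, announced with
     the correct role — and just as styleable as the div. -->
<button class="button" type="button" onclick="save()">Save</button>
```

The two render identically under the same stylesheet. Everything that differs is invisible to a sighted mouse user — which is exactly why the mismatch survives review.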

The user who loses is always the one least able to recover from the surprise.


The Chat Interface: A New Paradigm in Formation

We are living through what may be the most significant design paradigm shift since the graphical user interface itself. The rise of large language models has made conversational interaction — the chat interface — a primary mode of software use at scale.

As one analysis of LLM interface design puts it: "Traditional software interfaces were built around commands and functions: clicking buttons, filling forms, and navigating menus. LLM interfaces represent a paradigm shift toward conversational interaction, where the primary mode of communication is natural language." The familiar vocabulary of structured UI — form fields, dropdowns, navigation hierarchies — gives way to an open-ended text field. Instead of a designed path through a system, you describe your intent and the system responds.

This has happened fast enough that the new paradigm's conventions are already stabilising. The scrolling message thread. The input field anchored to the bottom. The typing indicator. Alternating alignment to distinguish user from model. These norms emerged in under a decade and are now expected. Violating them produces immediate disorientation — which is itself a remarkable demonstration of how quickly shared vocabularies form.

The Accessibility Opportunity

From an inclusion standpoint, the conversational paradigm offers something genuinely significant. Blind and low-vision users, who have historically had to navigate layers of inaccessible components, poorly labelled controls, and visual-only affordances, now interact primarily through text — a medium that is, in principle, more equitable. You don't need to find the button if you can just say what you want.

The promise is real. But the paradigm also introduces new exclusions, and the field has been slow to name them.

Language-based interfaces assume a level of written fluency and articulatory confidence that not all users share. The blank text field with its implicit invitation — tell me what you want — is not self-evidently more accessible than a well-structured, labelled form. Users with limited literacy, users who communicate in ways that don't map cleanly to text, users with expressive language difficulties, users experiencing cognitive load — these users are not better served by an interface that demands fluent natural language. The paradigm shift may trade one set of barriers for another.

Neurodivergent users present a related challenge. Many users with autism, ADHD, or anxiety find open-ended prompts cognitively taxing in ways that structured menus are not. "I don't know how to ask" is a real failure mode of conversational interfaces, and it disproportionately affects people who already face friction in other contexts.

A Convention Worth Questioning

Here is a structural observation about chat interfaces that almost nobody makes:

The typical chat layout places the conversation log above the input field. The first message is at the top; the most recent is at the bottom, just above the input. This ordering feels natural to sighted users because they can glance at the whole conversation simultaneously, orient themselves visually, and then look down to the input. The spatial logic is top-to-bottom, like reading.

But for a screen reader user navigating the page linearly, the experience is different. Every time the page updates, the user arrives at the top of the conversation log and must travel through the entire history to reach the most recent exchange — and then further still to reach the input. The most important thing (what was just said, and where to respond) is at the end of a potentially very long journey.

What if the input were placed first in the DOM, and conversation items were ordered newest-first? The most recent context would be immediately available; the user could respond without traversing the history. For keyboard and screen reader users, this could be significantly more efficient.
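The idea can even be sketched in markup, since CSS can reverse visual stacking without touching source order. This is a thought experiment made concrete, not a recommendation — `column-reverse` containers have their own scrolling and screen reader quirks, and the markup here is entirely illustrative:

```html
<!-- Input first in the DOM; messages newest-first after it. -->
<main class="chat">
  <form class="composer">
    <label for="prompt">Message</label>
    <textarea id="prompt"></textarea>
    <button type="submit">Send</button>
  </form>

  <!-- A screen reader user lands on the latest exchange
       immediately after the input, with no traversal. -->
  <ol class="log" aria-label="Conversation">
    <li>Model: most recent reply</li>
    <li>You: most recent prompt</li>
    <li>Older messages follow</li>
  </ol>
</main>

<style>
  /* column-reverse stacks items bottom-to-top, so the composer
     (first in source) renders at the bottom, and the newest-first
     log renders oldest-at-top — visually identical to today's
     convention, structurally inverted. */
  .chat { display: flex; flex-direction: column-reverse; }
  .log  { display: flex; flex-direction: column-reverse; }
</style>
```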

The counterargument is obvious, and it's the same one that appears throughout this article: this would violate a convention so thoroughly internalised that the disorientation for sighted users would outweigh the benefit. Most users would experience it as broken. The convention would need to change at scale — across many products simultaneously — before the new pattern could be learned.

This is not a reason to dismiss the question. It is an illustration of exactly how paradigm shifts work, and exactly how expensive they are. The current chat convention reflects a bias toward simultaneous visual perception that is so old and so assumed that it has never been named as a convention at all. Naming it is the first step toward imagining alternatives.


What Design Revolutions Actually Reveal

The pattern across these paradigm shifts is consistent with Kuhn's account, but with an important addition. Scientific revolutions, in Kuhn's telling, are value-neutral — the new paradigm wins because it handles the current crisis better, not because it is closer to truth. Design revolutions are not value-neutral. Each one reflects — and sometimes imposes — a particular set of assumptions about who is using interfaces and what they need.

Skeuomorphism assumed users needed physical metaphors. Flat design assumed users could learn to read minimal signals. Material Design assumed motion and elevation communicated hierarchy. Neumorphism assumed aesthetic pleasure was sufficient justification. The chat paradigm assumes users can and will articulate their intentions in natural language. Flow conventions assume users share enough accumulated web experience to navigate predictable templates.

Each assumption is an inclusion claim. And each can be evaluated not against a ladder of design progress — no such ladder exists — but against external criteria: Does this paradigm work for users with low vision? For users with cognitive differences? For users who have never used a keyboard shortcut? For users who do not write fluently in the language of the interface?

As Lea Verou puts it: "Learnability is not an intrinsic property of a UI — it is a function of the context in which it is used." The same is true of accessibility. What a paradigm enables and what it forecloses depends entirely on what you bring to it — and on whether the people making design decisions have thought carefully about who brings what.

Conventions are all we have. There is no external physics of interaction, no rulebook handed down from outside the profession. What we have are shared agreements about how things should behave — agreements that form, dissolve, and re-form as the profession, the technology, and the culture shift. But "all we have" is not the same as "nothing." Conventions can be evaluated. They can be measured against the real and assessable question of who they include and who they leave out.

That's not a design revolution. It's just a standard worth holding.


References and further reading: