Think about the “small” frictions we encounter in daily life: catching your hand on the door frame every time because of a poorly placed handle, struggling to assemble a complex kitchen appliance, or failing to complete a transaction because of a confusing bank ATM interface… Most of the time we label these situations “user error” and end up blaming ourselves.

In “Design Error: A Human Factors Approach,” Ronald William Day emphasizes that design errors are not merely technical flaws, but the destructive result of neglecting human factors within the system development life cycle (SDLC). Consider Day’s well-known lawn mower example: a designer may create a grass collection bin that perfectly meets technical specifications; however, if the user struggles for minutes every time they try to empty it, the design fails because it does not align with the “User Conceptual Framework.”

When such conceptual failures scale—as seen in the Queensland Health payroll system failure—they can evolve into systemic disasters, affecting 85,000 employees and resulting in costs of up to $1.25 billion.

The Role of Human Factors in Design Errors

The most critical risk in design processes is the “Disconnect Model” between the designer and the end user. Research shows that designers spend more than 75% of their communication time with “clients” (often managers who are removed from operational realities), while direct contact with actual users remains below 25%. This “double-blind” situation leads designers to build systems from desk research and documentation rather than from real-world use.

Another dangerous dimension of this disconnect is the “Automation Myth.” It is often assumed that removing humans from a process will reduce errors; however, as Day demonstrates through his “5 Levels of Complexity Model,” the opposite can be true. At Level 4, the interaction between the SDLC, human factors, and budget trade-offs increases risk exponentially; at Level 5, humans are removed from the system entirely.

Automation excels at routine tasks, but in abnormal situations, machines lack human problem-solving capabilities. At this point, the user is reduced to a passive “monitor” who cannot predict system behavior—and in the event of failure, is often labeled a “convenient scapegoat.”

Tragic Lessons from the Real World

The cost of design errors is not always financial; sometimes it is measured in human lives. The two cases highlighted in the book serve as critical lessons for system safety professionals:

  • Zanthus Railway Accident: In this Australian incident, a remotely controlled track-switching system was introduced, but its designers failed to incorporate “interlocking”, a fundamental railway safety principle in use since the Armagh rail disaster, which prevents a route from being changed while a train is passing over it (a simplified sketch of the idea follows this list). The Human-Machine Interface (HMI) was also severely flawed: buttons were unlabeled, and no written instructions were provided. The result was catastrophic: a mistimed button press by an engineer led to a collision between two trains; 45 people were injured, and 21 had to be transported to hospitals by the Royal Flying Doctor Service.

  • Queensland Health Payroll System Failure: Here, a system originally designed for a simple organization of 1,800 employees was scaled up to serve a complex one of 85,000. The design concept was flawed from the outset, triggering a cascading domino effect, and when the system was launched without proper end-to-end testing, thousands of healthcare workers went unpaid for months. One of the most critical failures was that the designers relied solely on managers instead of engaging with the actual end users, the payroll staff, and this disconnect ultimately led to the project’s collapse.
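
The interlocking principle the Zanthus designers omitted is simple enough to express in a few lines of code. The sketch below is a toy model, not the actual signalling logic, and all class and method names are hypothetical; its point is that the safety rule lives in the system itself rather than in the operator’s timing:

```python
class Interlocking:
    """Toy model of railway interlocking: points (track switches)
    cannot be moved while the track section they protect is occupied."""

    def __init__(self) -> None:
        self.occupied: set[str] = set()   # sections currently holding a train
        self.points: dict[str, str] = {}  # points id -> "normal" / "reverse"

    def train_enters(self, section: str) -> None:
        self.occupied.add(section)

    def train_leaves(self, section: str) -> None:
        self.occupied.discard(section)

    def move_points(self, points_id: str, position: str, protects: str) -> None:
        """Refuse the request if the protected section is occupied,
        no matter when, or how often, the operator presses the button."""
        if protects in self.occupied:
            raise RuntimeError(f"Interlocking refused: section '{protects}' is occupied.")
        self.points[points_id] = position


ix = Interlocking()
ix.train_enters("crossing_loop")
ix.move_points("points_7", "reverse", protects="crossing_loop")  # raises RuntimeError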

Error Prevention: Hats and the 7 Critical Stages

Ronald Day proposes a “Hat” system tailored to the nature of each project to prevent errors:

  • RED HAT (Traditional Models): Used for projects with definitive outcomes, such as bridges or skyscrapers. Requires statistical quality control (e.g., Six Sigma). User input must be captured early—before anything is “set in concrete.”
  • BLUE HAT (Prototype Models): Used to test whether ideas are grounded in reality and actually work in practice.
  • YELLOW HAT (Adaptive/Agile Models): Ideal for software and modern workflows. The user is an active part of the design process.

To prevent errors from entering the system, the following 7 critical stages of design must be managed with precision:

  1. Concept Formation: The highest-risk stage. Any assumption made without situational user knowledge is a potential error.
  2. Specification Writing: Lack of a proper data dictionary leads to confusion and inconsistency in terminology (see the sketch after this list).
  3. Build: Workflow tools must be used with near-religious discipline.
  4. Testing: Unit testing alone is not enough—end-to-end testing with real users is essential.
  5. Implementation: Users should never be surprised; they must be involved from the very beginning.
  6. Training: Not a cost, but a long-term investment in safety and reliability.
  7. Maintenance: Once the system goes live, an “internship phase” begins—anomalies must be detected and addressed immediately.
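
To make stage 2 concrete, here is a minimal sketch of what a shared data dictionary can look like in code. The field names and types are hypothetical illustrations, not taken from Day’s book; the point is that every term a specification uses has exactly one agreed definition:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FieldDefinition:
    """One agreed-upon entry in the project's data dictionary."""
    name: str        # canonical term every document must use
    type_: str       # storage type, e.g. "decimal(5,2)"
    unit: str        # unit of measure, if any
    definition: str  # plain-language meaning, agreed with end users

# Hypothetical payroll example: one shared definition of "hours_worked"
# stops one team meaning "rostered hours" while another means "paid hours".
DATA_DICTIONARY = {
    "hours_worked": FieldDefinition(
        name="hours_worked",
        type_="decimal(5,2)",
        unit="hours",
        definition="Hours actually worked in the pay period, excluding "
                   "unpaid breaks; source: the approved timesheet.",
    ),
}

def lookup(term: str) -> FieldDefinition:
    """Fail fast when a specification uses an undefined term."""
    if term not in DATA_DICTIONARY:
        raise KeyError(f"{term!r} is not in the data dictionary; define it first.")
    return DATA_DICTIONARY[term]
```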

“Organic SDLC” and Continuous Agility

Day advocates for the “Organic SDLC” (Umbrella Model)—an approach that treats design as a living organism. In this model, stakeholders, designers, and end users function like interconnected organs of a single body; if one fails, the entire system collapses.

In today’s rapidly evolving world, “Continuous Agility” is no longer optional—it’s a necessity. The maintenance phase is not an afterthought, but a critical, ongoing cycle that enables the system to adapt to changing business environments.

5 Action Plans for Designers and Managers

To improve system safety and minimize design errors, the following strategies should be applied:

  1. Include End Users in the Design Team: Participatory design is not a luxury—it’s a necessity. Any design that fails to leverage users’ situational knowledge is inherently incomplete.
  2. Implement Comprehensive Testing Protocols: Don’t just test the code. Validate the entire operational flow end-to-end with real users (see the sketch after this list).
  3. Adhere to HMI and Interface Standards: Avoid unlabeled buttons and complex navigation. Follow established UI guidelines and maintain a consistent data dictionary.
  4. Treat Training as a Safety Requirement: Training is not a budget expense—it’s an insurance mechanism for the system’s proper operation.
  5. Sustain the Maintenance Cycle: Design doesn’t end at launch. Establish continuous feedback loops to identify and resolve anomalies during real-world use.
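
As a toy illustration of the difference between testing code and testing the operational flow, consider the hypothetical payroll rule below (the function, threshold, and scenario are illustrative assumptions, not Queensland Health’s actual rules). The test encodes a whole scenario a payroll clerk would recognize, which is where misunderstandings of the rule itself surface:

```python
def calculate_pay(hours_worked: float, hourly_rate: float,
                  overtime_threshold: float = 38.0,
                  overtime_multiplier: float = 1.5) -> float:
    """Hypothetical pay rule: hours above the weekly threshold earn overtime."""
    base = min(hours_worked, overtime_threshold) * hourly_rate
    extra = max(hours_worked - overtime_threshold, 0.0) * hourly_rate * overtime_multiplier
    return round(base + extra, 2)

def test_nurse_working_a_44_hour_week() -> None:
    """Scenario check written with a payroll clerk:
    38 h * $40 = $1,520 base, plus 6 h * $40 * 1.5 = $360 overtime."""
    assert calculate_pay(hours_worked=44.0, hourly_rate=40.0) == 1880.0

test_nurse_working_a_44_hour_week()  # runs standalone or under pytest
```

Unit tests of the arithmetic alone would pass even if the overtime rule itself had been misunderstood; scenario tests built with end users catch that class of error.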

Design errors are not an inevitable fate. By placing human factors and user experience (UX) at the center of the process, and adopting a disciplined approach, we can reduce costs and build safer, more reliable systems.
