22 January 2014

Taking the Fast RIDE: Designing While Being Agile

A new year and a new post in my less-than-active blog. Here is an article that originally appeared in interactions magazine.

While many design methods are practiced “in the wild,” the most prevalent one appears to be “Design first and ask questions later”—also known as “Throw it over the wall and see if anybody salutes,” “Launch first, fix later,” and so on. Whatever you call them, these approaches are all responses to the pressure for rapid turnaround mandated by Agile and other high-speed development environments. These design approaches are all proven methods—that is, proven to create Frankenstein UIs within a mere two to three iterations: That’s speed.
A single-minded focus on speed guarantees that these methods produce poor user experiences, because they do not allow for the reflection and deliberation necessary to achieve high-quality, coherent design. Instead, speed breeds pragmatic short-term solutions. An interaction style gets locked down in the early sprints. Then other ad hoc interaction styles emerge for parts added later. Too much emphasis on reactive speed reduces design to puzzle fitting: How can I put this new square-peg function into my round-hole application?
I am concerned that in an attempt to adapt to the pressure for speed, designers are failing to uphold what we know is good design practice. We compromise too much of the interaction strategy in order to “just get down to it.” The end result is competing styles, competing imagery, and unintelligible system/conceptual models.

Agile and the Emperor’s New Clothes

How do you get out of this cycle and still survive in the context of fast Agile or pseudo-Agile (aka reckless) release cycles?
This question may sound odd, coming after years of our learning to live in Agile environments. Agile’s true believers may argue that the whole point of Agile is not about producing speed, but instead about creating a structured process to produce high-quality results in the face of the pressure toward speed (which Agile itself did not create).
But there are obvious incompatibilities between Agile and the needs of good design practice. Unfortunately, an emperor’s-new-clothes mentality has developed in which it is not acceptable for user experience people to point these out. Instead, we become mockup monkeys scurrying to meet the next sprint deadline, throwing in features and widgets at whim.
Part of the problem is that Agile’s own best practices are often not followed—for example, and most important, regression testing, which introduces some flexibility into Agile software development. People tend to claim that Agile is iterative, but without regression testing, it is too often practiced in a piecemeal manner, wherein once something is developed it cannot easily be changed. This makes the emerging Agile practices look a lot like an incremental Waterfall development process. Once sprints begin, the train has left the station. You are then stuck with design decisions made at Sprint 0 or 1, with little ability to iterate on basic concepts.
Perhaps you have heard statements like these, or experienced similar things, in Agile environments:
  • Sprint 1: “Designer, just do something, anything, right now so we can get some feedback—we can always change it later.”
  • Sprint 2: “Designer, sorry, we can’t make changes anymore—that would affect the back end.”
  • Sprint 3: “Oh, we can’t do that because of lack of resources. You have to stick with your current design. Of course, you can always tack on a new visual design.”


As common and frustrating as this sequence of events sounds, the main point is not just that things keep changing; it is that the serial structure of Agile sprints, which may make sense for engineering, does not fit design. Often with the best of intentions, designers are either too lazy to push back or intimidated into thinking serially instead of conceptually. This fundamental mistake causes poor design.

Design does not proceed by dividing a complex problem into parts and then working on them sequentially. Design is more of a layered process, moving from broad concepts to devilishly (or heavenly) detailed design. Broad design concepts, overall IA, and key interaction models need to be established first. This conceptual design then guides the detailed designs. Then, as this detailed design progresses, some detailed decisions may fracture the existing overall concepts, showing their limitations. This speaks of the need to supply feedback from the detailed design work back to the underlying conceptual layer.

Establishing a successful conceptual design calls for intensive collaboration between designers and researchers. I would advocate (hence my article’s inclusion in this section of interactions) that in Agile environments, designers and researchers need to be joined at the hip more than ever. They need to work in close concert in order to be as strategically and tactically “agile” as possible.

What About RITE?

Rapid Iterative Testing and Evaluation (RITE) is one of the main ways of trying to incorporate an iterative user-centered design mind-set into an Agile development process. It also joins research and design by coupling testing with quick design revisions. (See Medlock, M.C., Wixon, D., McGee, M., and Welsh, D. The Rapid Iterative Test and Evaluation Method: Better products in less time. In Cost-Justifying Usability. R.G. Bias and D.J. Mayhew, eds. Morgan Kaufmann, San Francisco, 2005, 489-517, for more information on RITE.) RITE does ensure some feedback from testing into design. However, in my observation, it does not address the mismatch between Agile and good design practice.
RITE is too reactive and tactical; it also does not address the need to support early, deep design work. It actually separates the testing activity from the interaction design activity by fragmenting the team, and puts both design and research in a reactive, entirely tactical mode. I have seen teams where the usability researchers are consumed with scrambling to plan and set up a test of features on parts of a couple of pages.
Meanwhile the design team is moving on to design some other part. Soon the usability researchers are scrambling equally reactively to test features from the next pages under development. The result is that no one ever evaluates the overall concept or architecture, or even the contextual fit of the application as a whole.
RITE’s testing-led approach improves a design incrementally. There is nothing inherently wrong with this, as long as one is testing the right things. Unfortunately, this practice will never get you there, especially as testing will tend to focus on what is being worked on in a given sprint. RITE also can lead to a kind of tyranny of testing.
Testing is an important evaluation technique. But a good researcher knows to mix a cocktail of different evaluative techniques to come up with a far richer view of the system and its user. RITE’s pragmatism never gets to the level of sophistication needed for a holistic design evaluation.

The RIDE Alternative

There are better ways. In this design-hostile environment, I advocate a design method I call Rapid Iterative Design and Evaluation (RIDE). In addition to rapidness, it emphasizes interplay between design and research, beginning with conceptual design, where every evaluation is not necessarily a test. The method also allows for alternative evaluation methods for specific iterations. Moreover, it includes partnering with engineering and product management to rapidly work through multiple concepts. This allows the team to identify the backbone of concepts. These backbones (as opposed to key user stories) get developed/evaluated first. This places UX strategic design decisions up front, where they belong. While these strategic decisions are still made in the context of a sprint, RIDE respects the need for the layered design thinking needed to design a system holistically.
RIDE strives to do this by:
  • encouraging the design team (by which I mean everybody with design input—interaction designers, visual designers, user experience researchers, and, yes, even engineers) to work faster through collaboration rather than working in isolation;
  • respecting the best practices of UCD/HCI/design; and
  • encouraging the development and evaluation of multiple design concepts at each stage.

The main ingredients of RIDE are:
  • Understanding the product context
  • UX Planning—establishing cross-disciplinary collaboration
  • Defining UX goals
  • Rapidly generating and evaluating multiple design concepts
  • Holistic iteration

The first two steps here belong in the product-definition phase before the sprint cycles begin. Since Agile promotes fragmentation, a holistic view of the product should first be developed. Unlike a traditional UCD project, the concept is developed in broad strokes, leaving the further definition to the sprints.

Understanding product context 

Understanding product context—taking time to understand the users and usage context.
The product context is an essential element in the definition of the product. This effort, led by product management, includes development, design, and research. It helps clarify the product landscape, the users, and their environment. Design helps to visualize this definition through the rapid sketching of multiple concepts. These concepts are done quickly and iteratively, as more and more information is gained about the product. The end result is a basis for a product requirements document (PRD) and two to three credible alternative UX directions.

UX planning

UX planning—establishing cross-disciplinary collaboration at each phase.
Design and research create a UX plan, which is meant to span the design and sprint iterations. The plan anticipates the possibility that some research and design activities stretch over several sprints. This is the strategic plan for achieving the design goal of the product; it includes identifying the types of evaluation, research, and design activities that will take place and when. Again, this plan does not follow sprint planning but will take it into account. After every sprint, the plan can be revised based on the usually unexpected outcomes of the sprint.

The RIDE UX plan also creates the possibility for each stakeholder to influence, guide, and inspire the design at all levels. RIDE acknowledges that design is not done in a vacuum. The more isolated it is from other disciplines, the more discontinuity and signal interference will arise in the UX. Rather than separating UX design into individual sub-disciplines (visual, information, interaction, engineering, product management, and other stakeholders), these all need to work together, because each part of UX design informs, feeds, and inspires the others.
For example, there is nothing to prevent a researcher or engineer from coming up with a great interaction design solution, nor a designer from coming up with a better analysis of the data. Quite the contrary—combining their differing perspectives almost guarantees new, innovative ideas. We need not be afraid of a plethora of ideas. Some designers in fact fear engineers making design decisions. Yet if a developer is in tune with the larger design concepts, the chances of their having a good point are significantly higher than when they are not. Further, good designers should be able to defend their designs and persuade stakeholders of their value; otherwise they might have to face the possibility that they are wrong.
Outside of these core stakeholders, there is another, equally important outer circle. These people include anyone who takes a vital interest in the user experience and has the power to influence it for good or evil—anyone from developers to marketers to CEOs. Without cultivating their support from the beginning and working to keep them on board, you risk having progress derailed later by random changes of direction. In this planning phase, one should engage them early and at the most abstract level they can stomach: product definition. Get them to wrestle with the big conceptual design choices before development starts. Their support also strengthens your mandate to execute on the concept.
Of course, nothing guarantees that the CEO won’t insist on purple buttons late in the game; however, in an Agile environment, this is less likely to happen if the stakeholders are on board when the conceptual train leaves the station.
Without this plan, you have to find some other way of dealing with these outer-circle UX inputs. Research may give you a chance to resolve design debates objectively, but the Agile timeline typically does not allow for this. The best defense is to make them partners proactively in the design process: include as many stakeholders as possible early on in brainstorming sessions where the conceptual design is worked out.

Defining UX goals

Before the beginning of a sprint, it is important to establish the UX goals. These goals require four types of iteration: product definition, conceptual design, detailed design, and evaluation activities. I don’t mean to suggest that these are serial types of goals; rather, they are all interconnected. In the early sprints, the accent may be on product definition, moving to conceptual design. But even these should be done in service of the detailed design. This way, activities do double duty: iterating the practical short-term goals and informing/evaluating the strategic longer-term goals.
These goals are also planned along three time scales:

• The project end—progress to product release
• The current UX plan timeline—the current planned and ad hoc UX activities
• The current sprint—a time snapshot of the current state of the UX goals.

Rapidly generating and evaluating multiple design concepts

Designer and researcher work in a coordinated effort to design, evaluate, and then iterate during the sprint. Many parallel activities are driven by the design strategy, not by a reactive “test and see what happens” approach. This differs from RITE in that the UX goals determine the strategy for the sprint. The evaluations may or may not trigger reiteration; they may just inform. They can also split off another design variation for exploration. This depends on the agreed-upon evaluation activities: Are they formative or evaluative? Are they abstract or concrete? Will they help find a synthesis among competing ideas? Researcher and designer are partners in this effort. In parallel, longer and different activities (focus groups, interviews, cognitive walk-throughs, etc.) are done to triangulate with data from other evaluations.

Holistic iteration

This involves evaluation at the strategic level in parallel with detailed design. Detailed designs sometimes trigger refinement of the conceptual design, and vice versa. Moreover, evaluations can touch visual, interaction, information, and system design issues—one cannot predict which. Because many design disciplines are involved, the same finding can lead to vastly different responses. For example, when a particular design tests poorly, an interaction designer might change the interaction, while a visual designer might change typography, colors, and so on. This again leads to incremental, non-holistic revisions. Real iteration would involve these different design disciplines and find ways to distribute the answer among all the design elements, thereby producing a far more robust iteration. Holistic iteration also means assuring the delivery of what engineering needs to meet their goals and their management objectives. Meeting this need will, of course, mean trade-offs on design. This is why having engineering in the core team is essential. Any development method that ignores or frustrates the engineering team’s management goals will fail.

Conclusion

RIDE is a richer design methodology because it leverages a collaboration of all stakeholders. It encourages exploration through a reliance on multiple design options that are synthesized, as opposed to a single option that is puzzle-fitted with additional features. RIDE seeks to find a holistic solution for Agile design and development, not just a tool for rapid changes to a single concept. But most important, it tries to find the right way for design (interaction, visual, and information) to work with researchers: joined at the hip. It also requires strength and energy, because—as with any quality UX—there is no free RIDE.

22 March 2010

Prototyping 3: What is a prototype, part 2: other dimensions

In Part 1 of “What is a prototype” we discussed many dimensions of what a prototype is, but that covers only part of the story, actually only half. There are many more layers of complexity, and if you do not understand them, then instead of you controlling the prototype, the prototype will control you, or worse, victimize you. So let’s begin with some prototyping characteristics, for lack of a better term.

Prototyping characteristics


Prototypes have many more important characteristics than just content and fidelity. Knowing what these characteristics are will help you plan and execute the prototype at the right level of effort. They are too numerous to name all here, but here are a few examples:
Longevity -- what is the lifecycle of the prototype? Is it something to be presented and thrown away, or is it part of an evolutionary prototyping design cycle? How long a prototype will continue to haunt you should affect how much effort you are willing to put into it.
Stage -- what stage of development is the product in? Usually, the more mature the product, the more detailed the prototype should be.
Speed -- how much time do you have? One week probably isn’t enough time to make as thorough a prototype as you would like; you may have to adjust your content-fidelity ambitions based on how fast you must work.
Style -- will the prototype be narrative (e.g., demoed) or interactive (e.g., used)? Interactive prototypes are more difficult and time-consuming than narrative ones.
Medium -- will the prototype be in a digital or physical medium? If digital, will it be on the web, mobile, or a desktop application, etc.?
Being aware of the characteristics of a prototype empowers you to make a much more professional judgment about what kind of prototype to make.

Defined audience(s)


Audience -- who is the prototype for? Unlike the end product, which is meant for an end user, a prototype is meant for certain stakeholders, who may or may not include end users. The prototype should be designed to communicate clearly with those stakeholders. For example, this usually means that a prototype meant for the CEO of the company will probably look different than a prototype meant for a domain expert.

Prototyping tool(s)


Prototyping tools are tools of the trade: the more you know, the better. That said, the simplest software tools suffice for most purposes.
There is no single prototyping tool that can do everything. Prototyping tools are as varied as the types of prototypes. Prototypes can just as easily be made in Excel, PowerPoint, Visio, or even Word as they can in Axure, Dreamweaver, Visual Studio, etc.
The point is to match two things: first, match the prototyping characteristics with the right toolset; second, of those tools, use the ones you know best. Chances are, your skill with software you know well will outstrip the added functionality of other software tools.
Personally, I no longer use a single tool, but quickly jump between graphics editors, HTML editors, scripting tools, layout tools, and yes, the occasional prototyping tool.
...but having said that, there are some broad types of tools:

  • Dedicated prototyping tools
  • Programming tools with prototyping capabilities
  • Graphical tools
  • Layout tools
  • Presentation tools

Dedicated prototyping tools -- tools used only for the creation of prototypes, not working software or any other purpose. Examples:

  • Axure
  • Denim
  • Balsamiq


Programming tools with prototyping capabilities -- tools that can create fully functioning software but, thanks to their efficient interfaces, also allow users to create prototypes. The theory, or rather the myth, is that if a designer uses one of these tools, a programmer can take over the design and implement it without recreating it. This is rarely true: the HTML or programming code written by a designer (focused on visualizing something) is completely different in nature from that of a programmer (focused on implementing something). Examples:

  • Dreamweaver
  • Visual Studio
  • Flash


Graphical tools -- tools that help you create the visuals of an interface, ideal for wireframes. Sometimes these tools can also mimic interaction making them suitable for a variety of prototypes. Examples:

  • Photoshop
  • Fireworks
  • Paint Shop Pro


Layout tools -- tools that help you lay out content. Sometimes these tools include interactivity, such as hyperlinks or programming scripts, that helps create a variety of prototypes. Examples:

  • Word
  • Pages
  • Excel
  • Numbers
  • Visio


Presentation tools -- tools that have some built-in narrative capabilities, making them particularly (though not exclusively) suited to narrative prototypes. Examples:

  • PowerPoint
  • Keynote
  • Acrobat


Method


Prototyping is much more than just wireframes or a ‘dumbed down’ version of real software. The methods are many, and in addition to the methods below, there are all sorts of hybrid methods that combine features of other methods. Just to give you a flavor, here are examples of some of my favorite methods:


  • Wireframe Prototyping -- A wireframe is a narrative prototype, usually created in the beginning of the design process. This prototype shows high-level sketches, visualizing conceptual assumptions about the product structure and general interaction.

  • Storyboard Prototyping -- A storyboard is a narrative prototype, usually created in the early stages of the software-making process to articulate business and marketing requirements in the form of a usage scenario or story. These stories narrate the user actions needed to perform tasks as specified by marketplace, customer, and user requirements.

  • Paper Prototyping -- A paper prototype is an interactive prototype that consists of a paper mockup of the user interface. The interface is usually fully functional, even if all the functionality is mocked up on paper. Paper prototypes allow you to test a design with many different stakeholders, including end users.

  • Digital Prototyping -- A digital prototype is an interactive prototype that consists of a digital mockup of the user interface. The interface is usually partially functional, even if the functionality is implemented by hyperlinking, screen switching and other methods of mocking up actual interaction. Digital prototypes like paper prototypes allow you to test a design with many different stakeholders, including end users. Unlike paper prototypes, digital prototypes can be tested remotely.

  • Blank Model Prototyping -- Blank models are low-fidelity prototypes produced quickly by user study participants using readily available arts and crafts materials to represent their notions about what an intended hardware/software design could be like. This method is used in the early stages of product design to elicit user perceptions and mental models about hardware form factors and interaction controls in conjunction with a software user interface.
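
The screen-switching idea behind digital prototypes can be sketched in a few lines. The following Python sketch (screen and action names are invented purely for illustration) shows that a click-through prototype is essentially a lookup table from (current screen, user action) to the next screen, with no real application logic behind it:

```python
# A click-through digital prototype reduced to its essence:
# a lookup table that switches screens in response to user actions.
# Screen and action names here are hypothetical, for illustration only.

SCREENS = {
    ("home", "click_search"): "search",
    ("home", "click_login"): "login",
    ("search", "click_result"): "detail",
    ("detail", "click_back"): "search",
}

def next_screen(current: str, action: str) -> str:
    """Return the screen to show next; stay put on undefined actions,
    just as a hyperlink-based mockup ignores clicks on dead areas."""
    return SCREENS.get((current, action), current)

# Walk one usage scenario through the prototype.
screen = "home"
for action in ["click_search", "click_result", "click_back"]:
    screen = next_screen(screen, action)
print(screen)  # the scenario ends back on the "search" screen
```

This is exactly what a tool like Axure or a set of hyperlinked pages does for you behind the scenes; the point is that mocked-up interaction needs only a transition table, not working software.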


And with the prototyping methods, that covers the definition of a prototype. Now was that so painful? You now understand, at least to some degree, the richness of prototyping. Instead of being victimized by these dimensions, you should be wielding them like a weapon. So hopefully now you can understand the basic concepts of effective prototyping: that a prototype has:

  • purpose
  • content
  • content fidelity
  • requirements and assumptions
  • prototyping characteristics
  • defined audience(s)
  • toolset
  • method

If any of these concepts are still not clear, I can discuss them in subsequent postings. Next week I will discuss the so-called benefits of prototyping, which could probably better be labelled: the myths of prototyping.

16 March 2010

Prototyping 2: What is a prototype, part 1

In my last post I discussed what a prototype does. Now here comes a far trickier question: what is a prototype? A prototype turns out to be quite complex, and rightly so. To get the benefits of prototyping (the subject of my next post), a prototyper must understand these vital concepts; otherwise you are just shooting arrows into the air.

A prototype is deservedly complex, since it is by definition the coming together of many different disciplines. Whether you like it or not, every prototype has an implied or explicit:
  • visual design
  • interaction design
  • technical implementation
  • information design
  • editorial content
  • and my personal favorite: a reason to exist

But those are all vague terms and do not really help you get control of your prototype. And getting control is the point of the definition of a prototype that I want to discuss. This definition will provide you with everything you need to control your prototype so it does not control you. Likewise, for you non-prototypers, it will give you enough information to fight what I call the razzle-dazzle effect: a prototyper who over-delivers a slick prototype and uses the wow factor to cover up a paucity of good ideas.

To begin, we need a prototype definition that covers the parts that make up a prototype, not what a prototype does (that was covered in the last post).

The Effective Prototyping definition of a prototype


A prototype is a model of a design that is:
  • utilized for a specific planned purpose
  • illustrating specific content and fidelity
  • articulating defined requirements and assumptions
  • specified with prototyping characteristics
  • customized for a specific audience(s)
  • created with a specific tool
  • performed in a specific method

Here is a less verbose but more specific version of the same definition:
A prototype is a model of a design with:
  • purpose
  • content
  • content fidelity
  • requirements and assumptions
  • prototyping characteristics
  • defined audience(s)
  • toolset
  • method

Below we will discuss them briefly; for more thorough details, you can always consult the full book, Effective Prototyping for Software Makers.

Purpose


A prototype is created for a specific purpose. Whether it is a proof of concept, a demonstration of a product’s interaction, or a visual direction, it is important to know what the purpose(s) is (are).

Content


Based on the purpose of the prototype, you will want to prioritize its content.
A prototype consists chiefly of four different types of content:
  • Interaction -- how a user will interact with it
  • Visual design -- how the prototype will visually appear
  • Editorial content -- what information will be on the prototype
  • Information Design/Architecture -- what will be the structure of the information

Fidelity


Generally, only in late stages do you want the content all at a high fidelity. Consequently, a prototyper will strategically set the fidelity of any given content higher or lower depending on what they want the prototype to focus on. The higher the fidelity, the more prominent the content. The lower the fidelity, the more the content will fade into the background.
Setting the wrong level of fidelity is the most common error. It results in discussions getting bogged down on visual design, when in fact the interaction design was the only intended goal of the prototype.
Contrary to what most prototyping texts state, you can play with fidelity within a content type. For example, you can raise the fidelity of the visual design for the chrome of an application and lower the fidelity of the content in order to discuss the visual structure. You can also de-emphasize a content type completely, for example by showing all text as greeked text so that your audience concentrates on the visuals or interactions instead of trying to read the editorial content, which usually grabs their attention.
However, the issue is more nuanced than it appears. For example, let’s say you want to test the interaction design. If you set both the visual design and the editorial content to the lowest fidelity, it will be impossible to really test the interaction: you need just enough editorial and visual design content to test the interaction. Likewise, if the visual design is already finished and agreed on by stakeholders, there is no real reason not to use a high-fidelity visual design.
In general, the rule is: lower the fidelity of the content you are both less sure of and do not want to evaluate. At any rate, a professional prototyper should be able to justify their choices.

Requirements and assumptions


The whole point of a prototype, when used as part of a digital product or service creation process, is to validate requirements, or rather to separate the requirements from the assumptions. A requirement is a function or feature that is necessary for the success of the product or service. An assumption is something that is presupposed to be a requirement but has never actually been proven or tested. A prototype usually consists of proven requirements, requirements to be validated in the current iteration, and assumptions. In general, the more assumptions, the riskier a prototype is. Knowing whether something is a requirement or an assumption will help you prioritize content and set its fidelity.
I see now the post is over 1,000 words, so let’s stop here and resume with prototyping characteristics next week.

07 March 2010

Prototyping 1: What does a prototype do

A series on prototyping
In the four years since our book on prototyping first came on the scene, precious little has been written about the professional way to prototype. Today prototyping seems to be the hot topic; unfortunately, most of the current material available on the internet gives only an isolated tip or trick. What is especially harmful is that most of these articles rush into how to prototype without really understanding what a prototype is. These works are rife with unquestioned assumptions and an uncritical approach to prototyping.
The book I co-wrote with Michael Arent and Nevin Berger remains the only thorough attempt to understand prototyping. In the coming series of posts on prototyping, I want to give a compact discussion of what a prototype is and how it works, drawn from our book Effective Prototyping for Software Makers. These posts will make the material a little more approachable, and if you want the full details, by all means buy the book.
In this series of posts I want to address three broad topics:
  1. What does a prototype do
  2. What is a prototype
  3. Raising the bar in prototyping
In this first post I want to discuss what a prototype does. For that I want to use a definition of prototyping that restricts itself to what a prototype does, not what it is. I turn to a definition from the book Universal Principles of Design by William Lidwell and others:
A prototype is “The use of simplified and incomplete models of a design to explore ideas, elaborate requirements, refine specifications, and test functionality.”
For ease of discussion, I will break this definition down into its components. First, I will throw out the models business because that goes into what a prototype is, which will be the subject of the next post. That leaves us with the following uses of a prototype:
  • to explore ideas
  • to elaborate requirements
  • to refine specifications
  • to test functionality
All prototypes attempt at least one of the above purposes, and usually more than one simultaneously, often without the prototyper even being aware of it. What is essential to know is that the prototype is first and foremost a communication medium. A prototype communicates the above four concepts.

Explores ideas

Here the accent is on whether the idea is desirable. Prototypes are at their best when they explore abstract concepts or ideas and make them concrete. It is easy enough for a group of technocrats to discuss their new idea for a killer document registration feature, yet being able to rapidly and interactively visualize it with a prototype makes the idea come alive and often inspires and informs the whole ideation process. Any software idea can be visualized with a prototype. Here are just a few examples (a fuller list comes in a future post discussing prototyping content):
  • Interaction design
  • Application functionality
  • Visual design
  • Information design/architecture
  • Rough concepts and ideas
Among the many means of using prototyping to explore are:
  • a single prototyper visualizes the idea
  • a group prototypes through participatory design practices
  • members of a group each sketch out their ideas as a group
  • a group brainstorms a prototype with a designer as facilitator

Elaborates requirements

Here the accent is on whether the idea is possible. A prototype elaborates requirements by illustrating what is necessary to actually put an idea into action. For example, an idea of having a running total in a web interface, when illustrated, will make a developer realize they need Web 2.0 technology. Or it could make the business analyst realize that discounts or other items that affect the total also need to be known upfront or somehow communicated to the end user. Once an idea is explored, software makers often look at a prototype differently. Among the types of requirements elaborated by a prototype are:
  • Business
  • Organizational
  • Functional
  • Technical
  • End user
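To make the running-total example above concrete, here is a minimal, hypothetical sketch (the function name, item tuples, and discount mapping are all my own illustration, not from the book): the moment you prototype the behavior, it becomes obvious that the discount rules must be available up front for the total to be correct.

```python
# Hypothetical sketch: a prototyped running total exposes the requirement
# that discount rules must be known before the total can be shown.

def running_total(items, discount_rules=None):
    """Return the intermediate totals as each (name, price) item is added."""
    discount_rules = discount_rules or {}
    totals = []
    total = 0.0
    for name, price in items:
        # Without the discount rules up front, this line cannot be computed --
        # exactly the kind of requirement a prototype surfaces.
        price *= 1 - discount_rules.get(name, 0)
        total += price
        totals.append(round(total, 2))
    return totals

print(running_total([("book", 20.0), ("pen", 2.0)], {"book": 0.10}))  # → [18.0, 20.0]
```

In a live web prototype, each intermediate total would be pushed to the page as the user adds items; the sketch only illustrates the data dependency the prototype makes visible.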

Refines specifications

Here the accent is on whether the design is feasible and, if so, how. Once the idea is desirable and deemed possible, the detailed design comes in. A prototype is often a superior form of specification to a large paper document full of verbiage, where the requirements are difficult to ascertain, let alone visualize. Furthermore, the prototype speaks in the visual language of the product itself, so cross-cultural and language concerns become less of an issue for today's global development teams.
A prototype can serve as stand-alone documentation if it is a totally complete model. Otherwise, some form of annotation or a lightweight document is often needed to accompany it.

Tests functionality

Here the accent is on evaluation, for example whether the prototyped design is usable for the end user.
A working prototype (paper, digital, or whatever form) can be shown to stakeholders, who can test whether they can work with it and whether it works the way it should or the way it needs to. This way, corrections to the design incur no redevelopment costs.

Summary

So in a nutshell, this covers what a prototype does. In essence it communicates four things:
  1. to explore ideas -- is it desirable?
  2. to elaborate requirements -- is it possible?
  3. to refine specifications -- how do we do it?
  4. to test functionality -- does it work?
How and what it communicates will be discussed in my next post on what a prototype is. In that post I will discuss the characteristic parts of a prototype. Understanding these characteristic parts allows you to control how your prototype will come across to your audience.

02 March 2010

UX Strategy III: The UX Declaration of Independence from Engineering

(Co-written by Thomas Jefferson, I hope my non-American friends will indulge an American metaphor of the declaration of independence.)

When in the course of human events, it becomes necessary for one profession (UXD) to dissolve the bonds which have connected them with another profession (Software Engineering), and to assume their own powers, separate and equal from other professions to which the Laws of Nature entitle them. A decent respect to the opinions of software engineering requires that User Experience should declare the causes which impel them to the separation.

We hold these truths to be self-evident, that all products are endowed by their creators with user experiences with certain unalienable Rights, that among these are usability, satisfaction, and business feasibility. Furthermore the user has a right to a user experience which is derived from the entire company/organization not just what is technically feasible at a given moment.

That to secure these rights, User Experience professionals are engaged by companies and businesses. These professionals derive their just powers from a professional integrity that must not be compromised; otherwise, User Experience design loses whatever right it has to exist.

Software Engineering as a process has had a tyrannical effect on the User Experience professional, forcing them to shorter and shorter deadlines with less and less available resources, until the point is reached that UX professionals often find themselves going through the motions rather than truly designing professional products the way they are capable of creating them.

The history of Software Engineering processes is a history of repeated injuries and usurpations of UX terrain, all having in direct effect the establishment of an absolute Tyranny over this profession. To prove this, let Facts be submitted to a candid world:

  • Development methods are continually shortening their design process and their delivery deadlines

    • This makes it impossible to do a thorough and adequate design process, forcing us to take all kinds of irresponsible and inappropriate short cuts.
    • Specifically, Agile development processes attempt to preclude any upfront design or research as good UX processes demand
  • Development does not use UX metrics as a measure for its success
    • Consequently there is no business case for following UX best practices
  • Development keeps the UX bar purposefully low so that UX accountability is
    • non-existent -- even when it is clear that products are failing because of their poor user experiences
    • an afterthought -- the product is a success or failure and after the fact UX is blamed or ignored
    • an anecdote -- the arbitrary story or urban legend of use becomes definitional for the user experience
    • unprofessional -- as long as the bar is low, poor UX design will yield equal results making the establishment of UX best practices very difficult
  • Development’s near fetish-like fascination with a release puts artificial blinders on the UX process, resulting in:
    • assuring structurally sub-optimal results
    • cutting corners when it really is not necessary
    • giving undue credence to an artificial argument against UX additional processes
    • obscuring the value of user experience design by forcing it into the release focus of software engineering.
  • UX quality is now reliant on the kindness of strangers, that is to say, the extent to which a Software Engineering team is or is not enlightened to the value and processes of User Experience Design.

We, therefore, the Representatives of the united User Experience Designers, hold that instead of working under the hegemony of engineering, User Experience activities should work in coordination with, not in tandem with, Software Engineering.

Among the ongoing processes which User Experience should work on independently of Software Engineering are (a partial list; for the longer list of UX processes see the previous post in this blog):

  • User Research
  • Design Research
  • Requirements gathering (SEs are needed for technical requirements, but that is only one part of the whole requirements picture)
  • Product design
  • Conceptual design which may cover multiple products/channels and multiple releases.

Places where software engineering and user experience should work closely together include:

  • translating a conceptual design to a specific product release cycle
    • product definition
    • product detailed design
    • product design reviews and iterations
  • mentoring developers through a product release
  • evaluating software engineer work for fidelity to UX concept using appropriate UX metrics
  • release planning

Software engineering, in turn, should act as a mentor in the UX processes, assuring that technical feasibility for the short and medium term is tracked and noted. In this way Software Engineering, Product Management, and User Experience are truly equal partners in the creation of great products and product experiences.

Signed 2 March 2010

16 February 2010

UX Strategy II: About the iterative diagram: What is it?

In the second part of this Strategy discussion, I will concentrate on the Strategy diagram from the previous post. This post will cover what the diagram is and who it is for. There are more issues to cover to be complete, but I can always add an additional post if there is a desire for more detailed information about it. [Note: this post, like all my posts, is revised based on user comments and feedback.]

Just to review from my previous presentation (see post below): this diagram is a way of anchoring the design process to key strategic activities, thereby assuring both a true design process and a strategic execution of that User Experience design process. The alternatives in vogue now are either
  • seeing the User Experience as a bolt-on to engineering processes
    • 'Bolt-on' being American for: just embedding a UX process into a software engineering process
    • A software engineering process which is already cumbersome and unpredictable
    • In general adding design process to software engineering process is like forcing the square peg into a round hole.
  • or at best its own independent process that mimics a software engineering process

Where the UX process eventually turns into something that looks like some variation of
  • a waterfall
  • incremental design
  • some other variation of the straightest-line-between-two-points approach

The above points, coupled with my belief that "software engineering process" is a contradiction in terms, plead for the necessity of this new diagram.

Figure 1: the UX Strategy Iteration Diagram

In general terms, you can think of the diagram as a planning tool one can talk over with a program manager or client, or even with all key stakeholders during a workshop. You can also think of it like a hula hoop: you can cut it anywhere in the hoop, flatten it out, and make a project manager or software engineer happy to see a simplified overview of what activities you are going to do for the current cycle.
These diagrams can be stacked on top of each other and connected at key points to plan multiple user experiences among different channels, products, or services. This would allow planning and illustrating how a mobile product project can inform a web application project. Likewise, a strategy iteration can inform a tactical one, etc.
The strategy diagram and the planned activities should be revisited after each activity to see if its assumptions are still valid or if it is time to iterate the activities. In this way the strategy itself is iterative, just as the User Experience is. But before going into too much detail, I want to discuss two points here:
  • What is the diagram
  • Who is the diagram for

What is this diagram


This diagram is an attempt to create a model for User Experience Strategy, and in so doing also create an instrument for both:
  • understanding User Experience Strategy
  • planning a User Experience project for your company or organization, or, heaven forbid, for a client if you are one of those charlatan UX consultants like me.

The diagram consists of the following (names are provisional):
  • Circles
  • Elements
  • Activities
  • Properties


Circles


The circles represent iteration cycles. Iterations are centered on one or more elements, but they also have iterative effects on neighboring elements, and even ripple effects through the entire UX element landscape (see below). Even when an iteration merely confirms an already existing UX element, it still strengthens that element and thereby changes it. The circles show the interdependent nature of the User Experience as an expression of a series of elements.

Elements


An element is a major area of the User Experience, usually with one or more associated deliverables. In order to qualify as a major element in the User Experience, it must meet the following criteria:
  1. It plays an essential role in UX products, services, and other expressions (brochures, ads, etc.).
  2. It poses a major risk to the resulting product and/or organization if it is not ready.

Given this definition, it speaks for itself that each project/company/organization may have a slightly different diagram, but complete coverage is essential.

We (my colleagues at Stroomt and the helpful people who kindly mailed in their suggestions) identified a generic set of UX elements, namely:
  • Mission Statement
  • Vision
  • Goals and principles
  • Channels
  • Brand design
  • Business Case
  • Business Plan
  • Requirements
  • Define product/service(s)
  • Conceptual Design
  • Detailed Iterative Design
  • Evaluate and refine design
  • Release product and plan for next iteration


Each of these elements must have a sufficient level of maturity and stability in order to release a product or service to the world. The User Experience Strategist is obliged to review the state of each of these elements. It is not the job of the User Experience Strategist to be the person who delivers or executes on these elements; UX is by nature multi-, or I would say macro-, disciplinary. The UX Strategist is a facilitator first and foremost.
Figure 2: UX Strategy Diagram with activities

Activities


If these elements are not in an acceptable state, then activities should be planned to bring them up to the appropriate level. It is not the User Experience Strategist's job to perform all of these activities, or even any of them. Like the elements, the activities also require many different disciplines. The UX Strategist may be able to assist in finding and supporting the right people to perform the activities. However, the strategist is primarily concerned that all the information is available, up to date, stable, and mature.

Properties


Both activities and elements have properties. These depend on the needs of the organization, but can include things such as:
  • Staffing
  • Resources
  • Start and end dates
  • Deliverable requirements
  • Budgets
  • Etc.
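The four components above can also be sketched as a small data model. This is my own provisional sketch in Python, not something from the diagram itself; the class names, fields, and example elements are all assumptions:

```python
# Provisional sketch of the diagram's components: circles iterate over
# elements; elements carry activities; both carry free-form properties.
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    properties: dict = field(default_factory=dict)  # staffing, dates, budget, etc.

@dataclass
class Element:
    name: str
    mature: bool = False  # sufficiently mature and stable for release?
    activities: list = field(default_factory=list)  # planned to raise maturity
    properties: dict = field(default_factory=dict)

@dataclass
class Circle:
    """One iteration cycle, centered on one or more elements."""
    elements: list

    def needs_work(self):
        # The strategist reviews each element; immature ones get activities planned.
        return [e for e in self.elements if not e.mature]

vision = Element("Vision", mature=True)
brand = Element("Brand design", activities=[Activity("Brand workshop")])
cycle = Circle([vision, brand])
print([e.name for e in cycle.needs_work()])  # → ['Brand design']
```

The point of such a model is the review loop: after each activity, an element's maturity is reassessed and the circle is iterated again.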

Who is this for: Multi-disciplinary vs Macro-disciplinary


Last topic for this week: Who is this diagram for?
Well, definitely not for the faint of heart.
The User Experience Strategist, Designer, Project Sponsors, Program Manager, and Project Manager are those with the most to gain from getting this overview, as well as from being able to plan on a macro level. But the fact is, this is one way of getting the whole multi-, or rather macro-, disciplinary team literally on the same page about who is doing what and how it all fits together.
I use the term macro-disciplinary because, unfortunately, too often the word multi-disciplinary is bandied about to mean multiple disciplines without recognizing that these are mostly separate people. In most "multi-disciplinary" UI projects, the designer (or whatever the UI one-man band is called) is up late at night and on weekends, often caught talking to themselves in a desperate attempt to bring another discipline or perspective into their work. By macro-disciplinary I want to show that it is impossible not to include many talented people with many complementary, but more often contradictory, perspectives.
This last concept, contradictory perspectives, is essential to every successful design project I have ever worked on. This diagram allows these contradictory perspectives to be elegantly laid plain in a map. It also allows you to plan activities for incorporating those perspectives back into the larger UX iteration, so contradictions are resolved rather than brushed under the carpet.

Next week: the UX Declaration of Independence from Engineering.

09 February 2010

UX Strategy is different than UI strategy Part I

[Note: First of three parts.
Next post, Part II: a detailed discussion of the diagram below.
Part III: the UX Declaration of Independence from Engineering.]

Here is some big news: UX strategy is not UI strategy. This must be big news, since the two seem identical in how they are practiced. There seems to be a fundamental flaw in our ability to distinguish between UX practice and UI practice. There is, however, no shortage of writing on the definitional differences between the two; the topic has been covered almost to death (so I won't cover it here; if you are interested, Wikipedia is a fun place to start). Yet when the rubber meets the road, most strategists, designers, usability engineers, and other nefarious UX practitioners like me explain a process that looks awfully similar to UI design best practices anno 1989.

So here is some other big news, amazing news for everyone in the UX business: design is not engineering. What? You knew that already? That is strange, since I have yet to see a single UX strategy or UI process that is actually iterative, let alone independent of a development cycle. Oh, I am sure they are out there; it's just the secret sauce of a chosen few who really get it, right? Not likely. It seems to me that most people who claim to be UX designers are in fact UI designers.

The fact is that so few people understand UI design that no one really notices when it goes by another name, especially one that sounds more expensive, as UX design does. The reality is UI designers have nothing to be ashamed of: it is one of the most difficult and nuanced professions due to its inherent multi-disciplinary focus. The inclusion of Interaction Design, Information Architecture, Graphic Design, User Research, etc. is all classical UI design, not User Experience design. Because of this confusion, too many people are spinning their wheels in UX design when they are really discussing the important and essential issues around UI design. Here I would like to discuss my take on the practice of User Experience design.

Here is the good news: there is a solution.
I recently gave a talk in Utrecht on UX Strategy. I include the slides with this post.



I think my most important point in that talk is a real iterative UX Strategy that is based on design practice, not software engineering practice. A subsidiary thesis to that talk could be: if you are fixated on how UX fits into a development method (e.g., Agile, RUP, Waterfall, etc.), then you are not a User Experience Designer at all but a UI Designer. There is no shame in doing UI design, but then let's not muddy the UX waters with it.
Moreover, a real UX strategy should not only not resemble an engineering process, it should also be independent of it. Not to accept this reality is to concede the hegemony of engineering in both process and decision making on UX. That hegemony is not the reality except in engineering-driven companies. UX is inherently strategic, whereas UI design is inherently tactical, requiring a close association with engineering toward realization. Perhaps it bears noting that not all User Experiences have UIs, or have UIs as their most important component. UI design is in fact the place where Engineering and UX meet, but UI design is not the end of the UX strategy; it is rather one of its many expressions.
The presentation above identified four goals of UX Strategy; they are not the only goals, rather just the four I am concerned about:
1. Keep the client/business/organization focused on their business goals
2. Keep the Design and Technical teams focused on the Conceptual Design
3. Provide a predictable repeatable process
4. Maintain a UX reality check that is at once iterative, open-ended, and reliant on solid analysis (some may call this trusting one's gut) as any good design process should.

A good UX strategy is therefore better represented by a loop, as it is in the slide presentation above. A loop unlike any engineering process. A loop with no beginning, because we invariably enter at some arbitrary moment: when we are contracted, when we are hired, etc. It reflects the reality that we start anywhere in the process, and it also reflects the interdependent nature that one step will invariably influence another, if not for this product then maybe the next. Moreover, there may be multiple iterations occurring at the same time.
A much improved version of the image from the presentation would look like the image below, a kind of Ferris wheel approach. Each node on the wheel represents a common analytical element of a User Experience. This is a 1.0 release, so hopefully some helpful comments will come forward and allow me to iterate on it.
This image recognizes the interconnectivity of an organization to the user experience and the ripple effect one UX element has on the rest. One error in the drawing is that everything appears to have the same weight and magnitude, which is not true. A gear-like metaphor would be better: each element, mission, goals, etc. could be represented as a gear that turns the larger iteration gear, and each UX element's gear could be bigger or smaller depending on the character of the company or organization.

Each element can have a series of activities associated with it. These activities help continue the iteration cycle. The activities can vary with the organization, its needs, and (the weak link in the chain) the talents of whomever they hired to iterate the User Experience.
Another important aspect of the drawing is that the iteration cycle does not end. There is no ultimate goal with which life starts and then ends. The reality is that releases/successes are temporary, and no sooner is one goal achieved than the next goal must take over: the next quarter's numbers must be met, the next new thing must be created to stay ahead, etc. In this way the UX evolves throughout the life of the organization.
A few examples of a completely filled-in chart are given below. The first example is a complete cycle refresh. The next is a product-oriented iteration. The last is an organizational iteration with a proof-of-concept product at the end of the iteration cycle.
After the images below, I welcome your comments. I will refine the presentation and the drawing (maybe a good visual designer would volunteer?). Then the drawing will start to live. Will it ever be finished? I hope not, or we will be out of a job.
UX Strategy wheel template


UX Strategy wheel completely filled out



UX Strategy wheel completely filled out for a product oriented iteration


UX Strategy wheel completely filled out for a company iteration with a proof of concept product or service

20 October 2009

Agile & UX

The presentation below started out as a short talk at the UX Cocktail Hour. However, the presentation has been gaining in popularity, so I decided to post it here. Because it does not really stand alone for people who were not at the presentation, some misunderstandings have resulted. I will record the presentation with audio, but in the meantime, let a few words here suffice:
It is a design-centric view of Agile, or rather a war-weary, design-centric view of Agile.
The main point is that Agile is not a development method as much as it is a way of setting aggressive deadlines. What happens in one Agile project does not predict success in another. Instead, the designer needs to be agile in figuring out how they can best fit in. However, that agility should not extend to your design process. Designs still need to be well-thought-out concepts, not something grown together in piecemeal increments.
The bottom-line message (found on the last slide) is that to be truly successful in Agile, you need to follow your own design process but be intimately involved in the Scrum process, preferably as a Scrum Master. This is essential for maintaining an overview of what is going on with your design. At the very least, own the user-facing stories/requirements in the product backlog.
And Sherlock Holmes and Dr. Watson were meant to personify regression testing.

01 October 2009

Halcyon days at the EuroIA Conference

Last week I attended the EuroIA conference. I was there primarily to give a talk with my former Google colleague, Greg Hochmuth, on a project we did on online privacy. To be honest, I had low expectations for the conference, thinking it was not going to be very professional. That was my estimation of the IA movement in general. I favored the more rigorous CHI model. This reliance on, and faith in, CHI is why I have been working so hard to bring practitioners into CHI with the design track work and of course the DUX conference series, etc. I assumed that CHI was where the interesting professional UX work would be done. I did not expect any such thing at an IA conference, which I thought was too narrow and too niche to be interesting.
I was wrong and closed minded, both of which I find annoying.
I was quite surprised to attend a very fine conference with a strong practitioner focus, with competent representatives from industry giving case studies and thought-provoking discussions. There were, of course, more than a few misses. However, when you sit through a miss at a CHI conference, you have really wasted your time on some inapplicable, pedantic presentation; these were all interesting even if not earth-shattering.
I was also pleased to see that the attendees had a kind of willful confusion of IA with UX. Eric Reiss, one of the leaders of the conference series, said early on that he was proud they would have no debates on terminology or definitions.
What is IA
It seems to me that IA (Information Architecture) and HCI (Human-Computer Interaction) are two ways to achieve the same effect. One is information driven, the other is interaction driven. Both strive for but don’t quite achieve UCD. To borrow a Mahler analogy, these two movements seem to dig from opposite sides of the mountain to reach the center.
Setting the stage for the conference was an interesting case-study keynote by Scott Thomas on his work for the Obama presidential campaign website. A refreshing talk, one you would probably never hear at CHI, charting the work he did as designer, web developer, and IA for one of the most successful and high-profile web presences.
It was clear at the conference that there are those who specialize in IA and don't touch interaction design with a ten-foot pole; however, the majority seem to blissfully switch between the IA, ID, and UX designer labels based on what will get them the job or the most influence. The resulting conference content was interesting and competent, usually not pedantic (there were a couple of regrettable forays into pedantry... oh, I am being pedantic, aren't I?). I hasten to add that probably 10% of these presentations would have been accepted at CHI.
CHI Bashing
Not that I am in any way bashing CHI (well, I guess I am, sort of). CHI continues to be dominated by academia; that is its reason to exist. So it makes sense that more practitioner-oriented organizations thrive and offer better conference experiences, like EuroIA; SXSW is another such conference. However, there are some design heavyweights very active and present at CHI. People like Bill Gaver, Bill Verplank, Bill Buxton... hey, are all of them named Bill? So I guess we should also include Bill Card and Bill Dray...
Still, going to a CHI conference is daunting, and if you do not stick to the design- or practitioner-focused papers, it is really hit and miss. Then there is also the unfortunate academic who strays into a design paper and lambastes a practitioner for not holding double-blind studies on a project with a limited client budget. Ah, it is always embarrassing when people can't check their egos at the door.
So, it is good there are several credible alternatives to CHI. I guess this means I need to attend the next IA Summit and see what that's all about. I don't think I can take any more good stuff...
This profession
In the end, I had a friendly, familiar feeling at EuroIA, a feeling like I had met these people before. It seems that regardless of whether you are at CHI or EuroIA or UPA or wherever, the people of our profession(s) share a common empathic passion for our stakeholders. This makes us a particularly caring and sympathetic tribe.

04 September 2009

Measuring the User Experience

This week's post is a review of the book Measuring the User Experience by Tom Tullis and Bill Albert. From time to time, other book reviews will follow.

Why a book review

The current state of books on UX is deplorable. Many UX books can't make up their minds whether they are about a given subject or about the UX world according to Garp. Just looking at my UX bookshelves, I notice there are, for example, many books by authors with a narrow or focused expertise. These authors write books supposedly on a narrow subject, which they sustain for about a chapter or two before deteriorating into their own homemade version of the User-Centered Design process that has little if anything to do with the subject of the book they intended to write. The result is a book with grains of truth in a stew of platitudes. A review of just three books, one claiming to be on prototyping, one on designing, and another on UX communications, reveals that all of them cover more or less the same material, such as user research, task analysis, personas, and prototyping, but in such a way that they use both conflicting terminology and conflicting methods.

My ideal UX books are those that take a subject and stick to it. They explain their topic in a way that is process independent, so that it can plug into whatever processes companies or organizations utilize. The fact of the matter is that no two organizations adopt the same software development process. What they all have in common, whether they are called agile or waterfall, iterative or serial, is that they are all Machiavellian. Therefore, if a book's material cannot fit into the current Machiavellian software development processes, then the book is largely worthless, even if entertaining (though probably not as entertaining as E.M. Forster).

I think one of the best services I can do, then, is to help people navigate around these literary cataracts and start a series of book reviews. These reviews will try to highlight the best of the UX literary corpus.

Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics by Tom Tullis and Bill Albert

I want to start with one of the brighter lights in our industry, Tom Tullis. I have often wondered why he had not written a book earlier, given the high quality of the contributions he has made to our profession. Well, the wait is over.

It's true, it is a book on usability metrics. Now, I realize there are some people who hate metrics; these people particularly hate any accountability for their design work. I can't tell you the hate mail I received, even from large design firms, when as Interactions editor we did a special issue on measuring usability, guest edited by Jeff Sauro. Well, I purchased Measuring the User Experience (MUX, if you will) expecting a more thorough version of that special edition that went into the statistical significance of usability testing. I was in for a very welcome surprise: this book does not just cover summative usability statistics but many different ways to collect user experience metrics, and it also discusses proper analysis techniques.

The book empowers the reader to make the right decisions about which methods to use and what the metrics can and cannot tell you. As the book states, metrics can help you answer questions such as:

  • Will the users like the product?
  • Is this new product more efficient to use than the current product?
  • How does the usability of this product compare to the competition?
  • What are the most significant usability problems with this product?
  • Are improvements being made from one design iteration to the next?

This is a refreshing change from just looking at time on task, error rates, and task success rates. These of course play a role, but they are means to the end of answering these larger questions. Furthermore, the book points out that there is an analysis step that can greatly alter seemingly obvious findings.

I cannot tell you the amount of time and money I have seen wasted as perfectly reasonable, wonderful user research was conducted, only to have its results obfuscated and mutilated beyond use. This book will not just enable the usability tester or researcher to avoid such mistakes; it also empowers a project manager to see to it that a development project designs a solid usability study that fits the goals and needs of the development team.

In their discussion of designing the right usability study, the authors guide you in choosing the right metrics.

First you need to establish whether the goal of your study matches the goals of the users. On that basis you can then look at which metrics apply; the authors identify 10 common types of usability studies:

  1. Completing a transaction
  2. Comparing products
  3. Evaluating frequent use of the same product
  4. Evaluating navigation and/or information architecture
  5. Increasing awareness
  6. Problem discovery
  7. Maximizing usability for a critical product
  8. Creating an overall positive user experience
  9. Evaluating the impact of subtle changes
  10. Comparing alternative designs

Then, a key issue they discuss is looking at budgets and timelines, aka the Machiavellian business case for the study. From there you can tailor the type of study: how many participants, and whether it will be tests, reviews, focus groups, or some combination thereof.

In the conduct of these studies it is also important to track the right metrics. Tullis and Albert identify the following types of metrics:

  • Performance metrics -- time on task, error rates, etc.
  • Issue-based metrics -- particular problems or successes in the interface, along with severity and frequency
  • Self-reported metrics -- how a user reports their experience, via questionnaires or interviews
  • Behavioral or physiological metrics -- facial expressions, eye tracking, etc.
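As an aside, the simplest performance metrics are easy to compute yourself. Here is a minimal sketch of my own (not from the book) for summarizing binary task success with a 95% adjusted-Wald confidence interval, an approach often recommended for the small samples typical of usability tests:

```python
import math

def task_success_ci(successes, trials, z=1.96):
    """95% adjusted-Wald confidence interval for a task completion rate.

    Adds z^2/2 to the successes and z^2 to the trials before computing
    the normal interval -- a common adjustment for small samples.
    """
    n_adj = trials + z**2
    p_adj = (successes + z**2 / 2) / n_adj
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - half), min(1.0, p_adj + half)

# Example: 7 of 10 participants completed the task
low, high = task_success_ci(7, 10)
print(f"completion rate 70%, 95% CI {low:.0%} to {high:.0%}")
```

The point, as the book makes clear, is not the arithmetic but knowing which interval is appropriate for which metric and sample size.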

It handles these metrics as they should be handled: as part of an overall strategy, not favoring one type over another as innately superior. All too often, usability testing consultants are one-trick ponies, prisoners of whatever limited toolset they happen to have learned.

This book allows the reader to assemble all the needed metrics across types to achieve a more holistic view of the user experience, or at least sensitizes them to the fact that they are not looking at the whole picture.

What is also amazing is the focus and discipline of the book. I think many other authors would not have been able to resist the temptation to expand the book to cover how to perform the different types of evaluations, usability tests, and so on. These authors acknowledge that there are already books covering those related aspects and keep their emphasis purely on the subject matter of their book: measuring the user experience.

Yes, the book does also get into statistics, and it even shows you how to do simple, straightforward statistical analysis using that panacea for all the world's known problems, Excel (but that is next week's topic).
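To give a flavor of the kind of simple, straightforward analysis involved, here is a sketch of my own (in Python rather than Excel, purely as an illustration, with made-up data) of summarizing time on task with a mean and a t-based 95% confidence interval:

```python
import math
import statistics

def mean_with_ci(samples, t=2.262):
    """Mean and 95% confidence interval for a small sample.

    t defaults to the critical value for 9 degrees of freedom
    (i.e. 10 participants); look up the right value for your n.
    """
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / math.sqrt(len(samples))
    return mean, mean - t * sem, mean + t * sem

# Hypothetical time-on-task data (seconds) from 10 participants
times = [34, 41, 38, 61, 29, 44, 52, 33, 47, 39]
mean, low, high = mean_with_ci(times)
print(f"mean time on task {mean:.1f}s (95% CI {low:.1f}-{high:.1f}s)")
```

Nothing here is beyond an Excel formula, which is exactly the book's point: the analysis is within anyone's reach if you know which summary to compute.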

And just in case you're wondering, the usability score for Amazon is 3.25, Google's is 4.13, and the Apple iPhone's is a mere 2.97. Meanwhile, the web application suite I just finished designing got a perfect 4.627333.

29 July 2009

Confusing a Heuristic with a Moral Imperative

Heuristics are excellent aids for identifying potential problems with a given user interface design. The trouble comes when people rely on them as the sole input, as if they could somehow overtake the more rigorous and far more accurate methods of evaluation. So please don't read what follows as anti-heuristic, but rather as anti-misuse of heuristics.
I have been working more and more with consultants and pseudo-designers who evaluate web applications with a ton of heuristics in their hands. I can hear them clear across cubeville, clipboards at the ready:
"This is terrible, you are inconsistent between these pages, those pages ignore web standards, these other pages behave differently than the others, and oh my gosh look at all these unnecessary graphics, rip these all out. Get rid of the background coors, and ugh those button colors!"
Concept and user groups can trump heuristics
The fact is, there could be a valid reason for violating every single one of these heuristics. Worse yet, there are evaluators of this type who, without so much as learning the context, go in and tear apart a site for violating standards, UI conventions, and heuristics of all sorts.
A well-defined and innovative concept will often require breaking a few rules. Moreover, if a concept is tailored to a specific user group that the evaluator does not belong to, then the heuristics are almost invalid.
Heuristics are defined as follows (according to my Mac dictionary, and why should we doubt Apple?):

Heuristic (/hjʊˈrɪs.tɪk/) refers to experience-based techniques that help in problem solving, learning, and discovery. A heuristic method is particularly used to rapidly come to a solution that is hoped to be close to the best possible answer, or 'optimal solution'. Heuristics are "rules of thumb, educated guesses, intuitive judgments or simply common sense."


Well, here are some of these so-called common-sense rules of thumb, with some food for thought to think about alongside them. I am using the list from Jakob Nielsen's site (http://www.useit.com/papers/heuristic/heuristic_list.html), just to pick 10 basic ones. This is not to pick on Jakob; the point here is to discuss the pitfalls when heuristics are used as the sole means of evaluation. As such, every heuristic can be picked apart and discredited; these are just 10 examples:

For each heuristic below, I quote Nielsen's justification and then add a "Yes, but..." of my own.

Visibility of system status
Nielsen: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
Yes, but: Maybe the user doesn't and shouldn't care. This heuristic assumes a user population that actually cares about what is going on. Many users couldn't care less unless it's going to cause them a problem. You should have some basic trust built with your users, and that trust may mean informing them only in the case of a problem, or handling back-end status problems yourself.
Match between system and the real world
Nielsen: The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
Yes, but: Not if the purpose of the site is teaching the user a domain or a new task. An example is Google AdWords, where a novice user does need to learn some basic advertising terminology or the advanced features will be lost on them.
User control and freedom
Nielsen: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
Yes, but: This heuristic seems to justify poor design. User control and freedom are about more than just undo and redo; they come from the safety to let the user explore and play around with the system. This is achieved through facile interaction design, a heuristic I have never seen listed.
Consistency and standards
Nielsen: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
Yes, but: This assumes 1. the user has no reference point other than platform standards, and 2. the platform has standards, or usable ones. Again, this justifies lazy design. Standards are a fallback (I say this as someone who has written UX standards for three major software companies); the conceptual design should be leading.
Error prevention
Nielsen: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
Yes, but: Here is a useless heuristic. What is an error? One man's error is another man's exploration. Maybe you should enable errors?
Recognition rather than recall
Nielsen: Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
Yes, but: Indeed the memory load should be lightened for the user; however, the better way to do this is to employ well-established visual and interaction patterns. Worse, this explanation can be very misleading for the naive reader. I have seen many a designer and developer use it to 1. attack progressive disclosure, and 2. create a ridiculously busy screen, throwing all functionality with equal visibility onto a "one-stop shopping" kind of screen, or worse, a screen with a huge amount of text explaining how to use it. All of these are, from a cognitive-ergonomic perspective, completely unusable.
Flexibility and efficiency of use
Nielsen: Accelerators -- unseen by the novice user -- may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
Yes, but: Building in redundancy to support multiple styles of interaction would be a better way of putting this. However, it needs to be seen in the context of the broader design concept. For example, there is often a designer fetish for drag and drop, when frequently it is only the designer who wants to perform this action. Also, implementing drag and drop in one place invites the user to try it everywhere, and it is very annoying when it does not work as they expect. So pick these accelerators well, not just for their own sake.
Aesthetic and minimalist design
Nielsen: Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
Yes, but: The explanation here is at odds with the heuristic. The heuristic seems to cry for everything to be a Scandinavian-style minimalist design, whereas the explanation goes on about text. The visual design should leverage the brand and its ability to communicate. Gratuitous graphics are supposedly bad, unless they delight the target users (think of Google's doodles on their home page). As for minimalism, I recall Tufte saying that anyone can pull information out; packing information into something while keeping it intelligible and usable is the real challenge.
Help users recognize, diagnose, and recover from errors
Nielsen: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
Yes, but: My only problem here is "precisely indicate the problem." I am sure Jakob did not mean to go into the gory technical details of the problem, but rather to describe the issue concisely. E.g., "Your data was not saved," not "Your data was sent to the application layer and experienced a timeout longer than 3 ms, and the system sent back the data in an unusable format."
My formula for error message writing: a short sentence on what happened (forget why), then a short sentence on how to fix it. A "Learn more" link (or "Why did this happen to such an undeserving user as me?") can be added for the morbidly curious.
Help and documentation
Nielsen: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.
Yes, but: Far from apologizing for help, we should revel in it. Help and documentation should be electronic and in context. For example, micro help (a question-mark icon or a "What's this?" link that works on mouseover or in a small popup) often assists the user without interruption.