Sunday, October 11, 2009

Interactions and relationships

For the Mx (Managing Experience) 2008 conference, I was asked to do a session that addressed the everyday reality that managers of user experience live in, to reflect on that reality, and to share some approaches and ideas for that reality. I decided to focus largely on some of the interactions and relationships that comprise that everyday reality, particularly those experienced by managers intent on enabling experience research and design to play a strategic role in their companies. I entitled the presentation "Interactions and Relationships."

A description of this presentation can be found in my April 2008 posting entitled, "Realities, dilemmas, framings, ..." Since I've been asked for these slides a lot, here I provide the slides I used, which are rich with provocative insights.

The first 9 slides accompanied introductory remarks that set the context for the presentation. A particularly important slide includes a collage of photos of the managers and executives who made guest appearances at a multi-week course I taught just prior to Mx '08 entitled, "User Experience Managers and Executives Speak." The course was wonderful, as reflected in the glowing course evaluations, and I decided to provide some of my guest speakers with a bigger stage via my Mx '08 presentation.

Slides 10 through 32 were borrowed from my presentation at a little conference in Rome called, "HCI Educators 2008." These slides address challenges experienced by management and non-management experience design practitioners, and you'll find several slides present words of relevance to these challenges from the guest speakers of my course.

The final 33 slides present even more words from the guest speakers -- words of relevance to examples of the ways these managers and executives have framed such challenges in order to address them. Attendees were asked to consider whether such framings would be beneficial in the companies for which they worked.

Enjoy the slides. And my hearty thanks to the managers/executives who "joined me on stage" both during Mx 08 and my course: Jeremy Ashley, Lisa Anderson, Klaus Kaasgaard, Jim Nieters, John Armitage, Christi Zuber, and Jeff Herman.

P.S. The slides AND AUDIO are once again accessible for a related conference session: "Moving UX into a position of corporate influence: Whose advice really works?"

Saturday, October 10, 2009

Organizational and market maturity

Jon Kolko and I have been discussing whether the pace of corporate adoption and acceptance of comprehensive and strategic designer participation in business has been increasing. Look for a portion of that discussion in a piece we'll be calling something like "On designers as catalytic agents..." to appear in interactions cafe, our conclusion to the January+February 2010 issue of interactions magazine.

While we were having that discussion, Charles Kreitzberg kicked off a short discussion in IxDA's discussion list on what you need to say to a CEO to convince him or her of the need for "user experience design" in a company. As if all it takes is the right collection of words...

A response to Charles suggested that the maturity of the market the company is in is likely to impact the effectiveness of such a collection of words. And though Jon and I were talking about designer participation in a broader sense -- i.e., beyond user experience design -- we discussed the concept of market maturity as well as corporate maturity, both of which have been addressed in numerous discussions over the years and for which numerous scales have been delineated. Since many may not be familiar with those scales, I thought I'd point to a few here.

Actually, I've pointed to a couple already in this blog. In "Developing user-centered tools for strategic business planning," I pointed to Jess McMullin's 2005 "design maturity continuum." Jess updated it a tad in December of 2008 and published the image of this version that appears nearby (click to enlarge). In his December 2008 post, Jess points out that his design maturity continuum is actually additive -- each higher level represents the addition of greater responsibility and scope for design.

Most other corporate scales I've seen are not additive but instead describe different stages organizations (or parts thereof) pass through. The first scales of this nature that I ever saw came from IBM Consulting in the early- to mid-90s and were used to rate the "usability management maturity" of their clients. Two of IBM's several scales, which appeared in little blue books they'd give to their clients, appear below:

HCI Resources
  1. Little or no investment in qualified people, prototype/simulation tools, equipment, and/or usability evaluation facilities.
  2. Some qualified people are available. There is limited availability of tools and equipment. A usability evaluation facility is available.
  3. Sufficient investment made in qualified people/tools. Budget for user involvement exists.
  4. Resources are applied effectively at proper stages and levels of the development process.
  5. HCI resources are fundamental to the development process and considered essential in planning product costs.
Integrated Design
  1. Various aspects of the design (panels, helps, pubs, installation, etc.) are designed separately or added late in the cycle.
  2. The need for interdisciplinary design teams is recognized, but efforts are uncoordinated.
  3. Plans for integrated design exist and are executed on a selective basis.
  4. Integrated design teams are normally established. Teams are effective in improving overall usability.
  5. All aspects of design evolve equally and in parallel. Designs provide users with solutions to needs.
In a 1994 book chapter, Kate Ehrlich and Janice Rohn delineated four stages of organizational acceptance of user-centered design. They are described in the table below (click to enlarge) which I took from Timo Jokela's 2001 dissertation.

Variations and extensions of this have appeared in a couple of international "standards," including the 1998 "ISO/DIS 13407 Human Centred Design for Interactive Systems":
0. Need unrecognized
1. Need recognized
2. Considered & encouraged
3. Implemented
4. Integrated
5. Institutionalized
Jakob Nielsen's 2006 version of such a scale -- which I've discussed in two earlier blog entries, including "Changing the pace or course of a large ship" -- combines elements found in all the above scales:
Stage 1: Hostility toward usability
Stage 2: Developer-centered usability
Stage 3: Skunkworks usability
Stage 4: Dedicated usability budget
Stage 5: Managed usability
Stage 6: Systematic usability process
Stage 7: Integrated user-centered design
Stage 8: User-driven corporation
(See "Corporate Usability Maturity: Stages 1-4 and Stages 5-8.")

Other such scales -- older and newer -- exist, and they look a lot alike, though each tends to appear without reference to any of the others. One of the more recent examples is Forrester's five levels of customer experience maturity, shown nearby via an image from a July 2009 blog posting by Bruce Temkin.

Have you found any of these types of scales to be of help to you in places at which you have worked? Have you observed any corporate progressions not addressed in the scales described here that you think should be captured in a scale? (I can think of a couple.)

As for market maturity, the example referenced in the IxDA list discussion should suffice -- the four stages delineated by Jared Spool earlier this year (see "Deriving Design Strategy from Market Maturity: Part 1 and Part 2"):
  1. The Technology is Worth the Pain (such as "when a new product category emerges," there are "no competitors or the users have no choice")
  2. Building Out the Features (which usually happens "once a competitor joins you in a category" in order to catch up)
  3. Focus on the Experience (when "customers stop focusing on new features and start asking for simplicity")
  4. Supporting a Commodity (when "the things we're designing are embedded into bigger experiences")
Do such stages of market maturity trump the delineated stages of organizational maturity? Not at all, but they intersect. Consider both when trying to figure out what needs to be done for designers to be more effective and/or to expand their role in a company.

Saturday, February 21, 2009

Want to increase the strategic relevance of User Experience within your company?

Would you like guidance from a panel of industry experts on how to increase the strategic relevance of User Experience within your company?

Please tell us about the situation where you work and how we can help by responding to a short questionnaire.

With your permission, we might discuss it during our panel session at CHI 2009 in Boston (see "Figuring out the 'one thing' that will move UX into a position of strategic relevance" for more info).

Wednesday, February 18, 2009

User (experience) research, design research, usability research, market research, ...

A version of this post was published in UX Magazine.

I rather miss heading up a user research practice and managing and supporting user research personnel. Recently, I nearly accepted a position heading up a highly-respected user research consultancy looking to take things to the next level.

But should such a practice or offering be referred to as "user research" these days? The term is still in use (though the word "experience" often lies in the middle), but the word "user" can imply a much narrower conception of the practice than is often intended. As I described in a much earlier blog entry, that was true when I was Director of User Research at Studio Archetype and Sapient; there, the label did not always communicate that we did more than research "users" and "use." And a recent conversation I had with an ethnographer who wanted to better understand "user research" -- something she said she did not do -- revealed that such preconceptions still exist even within the applied research community. (Use of ethnographic research methods was a big part of the "user research" we did at Studio Archetype and Sapient.)

In short, it is not always clear what label is best to apply to such a practice or consultancy. It is also not always clear what its ideal scope or focus should be or should become.

Lots of people conduct "usability research" these days, but the methods and approaches often used have lagged behind major changes that have occurred in the world of computing. In "Is usability obsolete?" -- an article we will be publishing in the May+June 2009 issue of interactions magazine -- Katie Minardo Scott argues:
"Current usability work is a relic of the 1990’s: an artifact of an earlier computer ecosystem, out of step with contemporary computing realities. Usability can no longer keep up with computing: the products are too complex, too pervasive, and too easy to build. And in our absence, users and engineers are beginning to take over the design process. These trends demonstrate the growing gap between usability theory and commercial practice – the “new realities” of computing haven’t been truly embraced by the usability community. The trends are, at a minimum, making traditional usability more difficult, if not irrelevant in the new paradigm."
The label "design research" is used more and more these days. But when Yahoo! abandoned the label "user experience research" for "design research" two or three years ago, previous efforts -- some of which had been mine when I was in a management role at Yahoo! -- to involve user experience research in the early stages of product and service ideation and conception were undercut. As described by Yahoo!'s Klaus Kaasgaard, guest speaker during a user experience management course I taught last spring, the new label made people think that the research was only relevant to the later "design" phase of the product development process.

The narrow interpretations of the label "user research" at Studio Archetype and Sapient prompted us to extend the label to "user research and experience strategy." The narrow interpretations of the label "design research" at Yahoo! led Klaus to change the label back to "user experience research." But a much more significant change was made at Yahoo! more recently: a merger of the user experience research group and the market research group, yielding an organization named, "Customer Insights."

When I was in a management role at Yahoo!, we discovered that market researchers were encountering some of the same obstacles as our user experience researchers -- obstacles to being appropriately involved upstream in the process so as to have a more beneficial impact on the company. So, we began to partner with market research in an effort to attain that involvement. During his guest appearance at my "User Experience Managers and Executives Speak" course, Klaus, now VP of Customer Insights at Yahoo!, spoke at length about the similarities and differences among the goals and challenges faced by market researchers and user experience researchers, and about how important the merger has been to achieving such a strategic role. In an excellent article in UX magazine (Volume 7, Issue 2, 2008), Robin Beers paints a similar portrait regarding bringing together the market research and user research teams under the umbrella of Customer Experience Research & Design at Wells Fargo.

Is such a "coming together" of these two disciplines appropriate for every company? No, as implied by eBay's decision to split them up after they attempted to bring them together. There are multiple factors to consider when determining what is best for a particular company. But it is important to understand that great benefit can be achieved when the two work together.

In an October 2008 contribution to Jakob Nielsen's Alertbox, Christian Rohrer provides a mapping of a wide range of research methods, some typically thought of as "market research" methods, that can help you to better understand their similarities and differences.

In the November+December 2008 issue of interactions magazine, Liz Sanders provides different insight via her map of "design research" (see the map below right, which you can click to enlarge). Here is how Liz describes the map's organization:
The design research map is defined and described by two intersecting dimensions. One is defined by approach and the other is defined by mind-set. Approaches to design research have come from a research-led perspective (shown at the bottom of the map) and from a design-led perspective (shown at the top of the map). The research-led perspective has the longest history and has been driven by applied psychologists, anthropologists, sociologists and engineers. The design-led perspective, on the other hand, has come into view more recently.

There are two opposing mind-sets evident in the practice of design research today. The left side of the map describes a culture characterized by an expert mind-set. Design researchers here are involved with designing FOR people. These design researchers consider themselves to be the experts and they see and refer to people as “subjects,” “users,” “consumers,” etc. The right side of the map describes a culture characterized by a participatory mind-set. Design researchers on this side design WITH people. They see the people as the true experts in domains of experience such as living, learning, working, etc. Design researchers who have a participatory mind-set value people as co-creators in the design process. It is difficult for many people to move from the left to the right side of the map (or vice versa), as this shift entails a significant cultural change.
Yet another map of methods was developed during the Netherlands Design Institute's Presence project during the late '90s. The image to the left (click to enlarge) shows the map, which requires a legend in order to identify which method lies where. In this image, the location of "rapid ethnography" is revealed, along with helpful information about the method regarding required expertise, time, staffing, and cost. (This "methods lab" used to be online, but I am now able to find it only in the 1999 book, "PRESENCE: New Media for Older People.")

The ratings in the above image remind me of ratings developed by Luke Hohmann for individual "innovation games" -- a variety of research methods employing collaborative play. (See image at right for his ratings for a game called Speed Boat, and see "What is holding User Experience back or propelling User Experience forward where you work?" for a sense of what that game is about.)

Tuesday, January 06, 2009

Figuring out the “one thing” that will move UX into a position of strategic relevance

A common question asked of successful User eXperience (UX) leaders is what "one thing" they needed to do in order to move their organizations into a position of strategic relevance. However, the answers often vary (among those who believe they've achieved such relevance), posing a challenge to those struggling to figure out how to achieve the same goal where they work.

For CHI 2007, I put together a session entitled, "Moving UX into a position of corporate influence: Whose advice really works?" during which six panelists -- all senior management folks from six very different companies -- argued in support of or against different combinations of five pieces of advice, each of which has been claimed by various people to have been the "one thing" most important to achieving strategic influence. Might one of those "one things" be the "one thing" you should attend to where you work? Might something else be that "one thing"? What is key to figuring that out?

At CHI 2009 in Boston, several panelists will each describe the "one thing" that they think can make all the difference. They will then analyze a variety of scenarios from a variety of companies one by one to attempt to predict the "one thing" needed in each case. And all participants will attempt to elucidate key aspects of the scenarios and the process of analysis to help audience members figure out how to figure out what "one thing" is likely to work in their own situations.

Are you trying to figure out what one thing you need to attend to in order for UX to gain a seat at the strategy table with business and engineering? If so, let us know -- tell us about your work situation, and we'll look into including it among the scenarios to be analyzed by the panelists.

Note that we'll also be addressing a mix of related questions, such as what makes a good "one thing," whether there can really be only "one thing," and just how adequate analyses like those to be attempted during the session can be. It promises to be an interesting session.