02 December, 2015

User Experience Design & other Overlapping Disciplines



This article was originally published on the Test Insane website.
The term user experience was coined by Donald Norman in the 1990s. A review of his earlier work suggests that the term "user experience" was used to signal a shift to include affective factors along with the behavioral concerns that had traditionally been considered in the field. Every product or service solves an unmet need or a glaring problem. Solving an unmet need or a critical problem is one thing; creating a delightful experience while getting work done is another. While any user might start using a product to accomplish some tasks, one of the aspects that retains users and improves loyalty is the user experience.
User experience is not a single discipline, but a gamut of disciplines put together. While some people think of usability and UX as the same thing, many don't know about the several other disciplines that intertwine with UX itself. Dan Saffer appears to have taken the first step towards summarizing this in pictorial format. The mind map below is a brief summary of Dan's idea:

UX is Collective Genius

In a typical project, some of us blindly assume that UX is the designer's or the creative head's responsibility. In reality, UX is not about solo genius; UX is about collective genius. Analysts need to think of users and their experiences while doing feasibility studies, developers have to empathize with users while developing the technical design, designers have to study whether users would use the product and be happy about it, testers have to consider testing for specific pain points that might drive users away, and implementation and maintenance teams have to hold the product to high standards while customizing it for the customer. In short, each and every team member contributes to UX in their own way, based on their understanding of the product and the users.
In that sense, UX is everyone's responsibility.

05 November, 2015

Contextual Keypad - Engaging Users via Mobile Inputs



Engaging Users
In September 2015, I was in Los Angeles, California for the STARWEST 2015 conference, staying at a hotel a stone's throw away from Disneyland. I took some time to attend the Color Show at Disneyland. The first thing I spotted was the Mickey Mouse Wheel, and I was mesmerized by the colors on it. Walt Disney and his imagineers must have put a lot of heart and soul into Disneyland. Otherwise, how would they have thought of engaging tourists by letting them control the lights on the wheel? The Mickey Mouse Wheel displays a visual pattern, using different colored lights, to the people gathered in front of it. Visitors can access the 'Mickey Mouse Wheel' website and repeat the pattern on their screens. (Note that the website URL was simple enough to be shared verbally over a microphone, and free Wi-Fi was provided to everyone in that play zone.) Whoever got the pattern right could control the lights on the Wheel for one whole minute. This went on for 15 minutes before the actual Color Show began. What I witnessed was thousands of tourists, adults and kids alike, playing this game on their mobile devices. Walt Disney must have accomplished his dream of involving adults and kids alike in having fun, together.

App Usage Statistics
There are different ways of engaging users, yet very few organizations think about it the way Disneyland does and take the effort to put it to use. The more engaging an app is, the more often users use it.
The Nielsen Company reported last year that mobile users use up to 37 apps per month. Studies by Google in 2014 revealed that users use up to 8 apps in a day. This data only increases the pressure on app-based companies to make their apps better.
Contextual Keypad for Touchscreen devices
Reading content on mobile devices is one thing. Accepting inputs from users and providing context-specific output is another. Whether users provide suitable values depends on how the input fields are designed to capture those inputs. In this article, we look at how to optimize the mobile input experience by providing contextual keypads.
Consider a form with different input fields like text, email, post code, phone number, IP address, web URLs and so forth. By default, the input type is set to 'text' for the most part. When users enter data into these fields, an alphabetical keypad is displayed. If a user has to enter an email address, they must go hunting for '@' and '.' by switching keypad layouts and tapping the respective soft keys.

Let's take an example of how the mobile app 'Polar' handles contextual keypads on its registration form, titled 'Join'.

Text Field
When a user taps on the Username/Full Name field, an alphabetical keypad is displayed. Note that a 'Next' button is placed on the keypad, letting users know they can move to the next field on the screen without moving away from the keypad. When the user then taps on Full Name, the 'Shift' key is automatically activated (not captured in the picture). This way, the user doesn't have to explicitly tap on it to write their full name in title case.
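For those building iOS apps, a minimal UIKit sketch of this behavior might look like the following (illustrative only, not Polar's actual code):

  import UIKit

  // Illustrative: a plain text field with a 'Next' key,
  // and a name field that capitalizes each word automatically.
  let username = UITextField()
  username.keyboardType = .default          // standard alphabetical keypad
  username.returnKeyType = .next            // 'Next' key moves focus onward

  let fullName = UITextField()
  fullName.autocapitalizationType = .words  // Shift activates for title case
  fullName.returnKeyType = .next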


Email
Suppose the user taps on the Email field to enter an email address. Suddenly, there is a need for '@' and '.' keys, but they are not available on the alphabetical keypad in many apps. The Polar app displays a keypad optimized for email fields. Note in the picture how the space bar compresses itself to make way for '@' and '.'.
Interestingly, when the user taps on the 'Password' field, the alphabetical keypad returns without the 'Next' button, as 'Password' is the last field on the form. This is a good example of how keypads are displayed based on context.
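A similar hedged sketch for the email and password fields, again using standard UIKit properties (the field names are mine, not Polar's):

  import UIKit

  let email = UITextField()
  email.keyboardType = .emailAddress     // keypad surfaces '@' and '.'
  email.autocapitalizationType = .none   // email addresses are lowercase
  email.returnKeyType = .next

  let password = UITextField()
  password.isSecureTextEntry = true      // mask the input
  password.returnKeyType = .done         // last field: 'Done' instead of 'Next'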
 

Web URL

Placing focus on a web address field displays a slightly different set of keys, including '.', '/' and '.co.uk'.
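The equivalent configuration for a web address field, in the same illustrative UIKit style:

  import UIKit

  let website = UITextField()
  website.keyboardType = .URL           // surfaces '.', '/' and a domain key
  website.autocapitalizationType = .none
  website.autocorrectionType = .no      // URLs should not be autocorrected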

Numbers - Phone numbers, Post codes, Others
A numeric keypad is displayed for phone numbers. For post codes, a slightly different keypad is used.
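And a sketch along the same lines for numeric inputs (the post code choice is an assumption; alphanumeric post codes may call for a different trait):

  import UIKit

  let phone = UITextField()
  phone.keyboardType = .phonePad                  // digits plus '+', '*', '#'

  let postCode = UITextField()
  postCode.keyboardType = .numbersAndPunctuation  // one option for mixed post codes
  // .numberPad would suit purely numeric inputs such as PINs or quantities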



How to test contextual keypads?
  • Make a list of all screens that accept inputs
  • Categorize input fields into text, email, password, phone number and so forth
  • For each input field, review whether the correct keypad is displayed (a test sketch follows this list)
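Here is a minimal unit-test sketch of that review step, assuming a hypothetical JoinViewController that exposes its fields. XCTest and the UIKit traits are real; the screen and module names are made up:

  import XCTest
  import UIKit
  @testable import MyApp   // hypothetical app module

  final class ContextualKeypadTests: XCTestCase {
      func testFieldsUseContextualKeypads() {
          let join = JoinViewController()   // hypothetical screen under test
          join.loadViewIfNeeded()
          XCTAssertEqual(join.emailField.keyboardType, .emailAddress)
          XCTAssertEqual(join.phoneField.keyboardType, .phonePad)
          XCTAssertTrue(join.passwordField.isSecureTextEntry)
      }
  }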

A general rule of thumb is to use minimal forms on mobile apps. Better still, avoid free-text inputs from users as much as possible. Better yet, use contextual keypads whenever users must provide inputs. If you want to improve the experience of your mobile apps, code them to display contextual keypads for the corresponding input fields.

What's your experience with contextual keypads?

20 October, 2015

Customer Touchpoints



This article was originally published on TechWell.
When testing a product, testers are often focused on which flow the user executes or how the user interface looks. It can be easy to neglect how support processes such as call verifications, email communication, online chat, and service request processes function. Does a user receive a welcome email upon joining the site? Does he get a verification call from the company? How is a complaint from a user handled?
These are all examples of customer touchpoints: the interface of a product or service with users before, during, and after a transaction. Touchpoints go a long way toward defining customer experience and an organization in general.
Key touchpoints include:
  • The organization’s website
  • Its brick-and-mortar store
  • Calls and text messages
  • Email
  • Chat channels
  • Service requests
  • Feedback
  • The field service team
How user feedback is handled also shows how much the organization cares about its users. If an email complaint from a user is never acknowledged or responded to, she likely wouldn't try contacting the organization again. Instead, she would complain in an online forum or over social media, sharing her poor experience with millions of people and possibly discouraging others from buying the same product or service.

Measuring Customer Touchpoints

Products have to be tested for different touchpoints to measure users’ experiences. Consider an example where the user needs to call a call center in order to activate a new cell phone number. Testers could use a questionnaire with queries such as:
  • How long did it take to get to the call center analyst?
  • Was the analyst courteous while communicating?
  • Did the analyst take corrective action appropriate for the situation?
  • How did you feel about the entire interaction?
These questions might seem unrelated to testing, but it’s just that instead of testing the product, they’re testing the process. And if we don’t test the processes that serve our users, the products themselves will fade away.
Another technique to measure customer touchpoints is recording calls and emails from customers. The traditional approach is to measure response times in seconds or minutes, but this leads to analysts responding to customer calls or emails in the quickest possible time rather than focusing on actually solving problems. Many organizations now monitor recorded calls to study call patterns and improve the effectiveness of resolving customer issues.
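As a rough sketch of how such questionnaire data might be aggregated, here is an illustrative Swift model; the structure, field names and figures are all invented for this example:

  // Illustrative: aggregating 1-5 ratings from a touchpoint questionnaire.
  struct TouchpointResponse {
      let waitTimeSeconds: Int
      let courtesyRating: Int     // 1 (poor) to 5 (outstanding)
      let resolutionRating: Int
  }

  let responses = [
      TouchpointResponse(waitTimeSeconds: 95,  courtesyRating: 4, resolutionRating: 5),
      TouchpointResponse(waitTimeSeconds: 240, courtesyRating: 3, resolutionRating: 2),
  ]

  let avgCourtesy = Double(responses.map(\.courtesyRating).reduce(0, +))
                  / Double(responses.count)
  print("Average courtesy rating:", avgCourtesy)   // 3.5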
The goal is always to provide users with a delightful experience, but this can be especially important when it’s in response to an initial customer complaint. Bigbasket.com is one of the leading online grocers in India. Bigbasket has a no-questions-asked return policy and compensates a customer every time it is not able to deliver as promised, resulting in high customer loyalty and repeat business.
It is becoming increasingly important to create good customer touchpoints because users are no longer looking just for products that serve their needs—they also want engaging experiences while using the product.
I presented the tutorial User Experience Testing: Adapted from the World of Design and the session How to Design a Custom Mobile App Test Strategy at STARWEST 2015, between September 27–October 2 in Anaheim, California.

01 September, 2015

Wireframes Testing - Part II [How did I do it]



The first entry in this two-part article talked about the fundamentals of wireframes testing and its advantages and disadvantages. In this entry, I will touch upon my journey with the expert review method of testing wireframes.
Expert Review
An expert review involves a subject matter expert reviewing the wireframes and providing feedback drawn from the gamut of their past experience.
High Fidelity Wireframe
A high fidelity wireframe is quite close to the final product: it has a high level of detail and gives a good indication of the final proposed product's aesthetics and functionality.
Consider a high fidelity wireframe in the picture above. This wireframe includes placeholders for tabs, hyperlinks, images, search box, breadcrumbs and others. In short, this wireframe contains the layout, navigation and hints on how the product might work or behave.
Low Fidelity Wireframe
A quick and easy translation of high-level design concepts into tangible wireframes constitutes a low fidelity wireframe. In the picture above, placeholders are defined at a very high level, indicating only the layout and structure of how the product might appear.
Problem Context
A year ago, I happened to test high-fidelity wireframes for a web-based product used by call center analysts to sort incoming service requests and process them into appropriate queues. The product sounds simple, until you hear that each analyst has to pick up every service request, convert it into a format that another automated system can understand, and push it into different queues depending on the type of service request. Each analyst has to process at least 100 requests per day, and the current system was too unfriendly to let analysts work productively without making mistakes. Hence the need for a redesign!
In their quest to help analysts use the product better, the development team wanted feedback on the wireframes they had developed, which they hoped would fix some of the challenges the analysts were facing, if not all.
Wireframes Testing using Expert Review Method
High fidelity wireframes have the information architecture and content strategy fairly sorted out, even though wireframes are early work products. This means that in addition to layout and navigation, the placement of information and the presentation of content are described well enough to make a call on whether the product conveys what it is built for.
Layout
The low fidelity wireframe above has few placeholders within the layout. One can provide feedback only on the structure of the layout, the positioning of placeholder elements and possibly the titles used. Beyond that, it is a pure skeleton of how the product looks and offers little scope for review.
On the other hand, a high fidelity wireframe provides better scope to review not just the layout, but a major part of the product itself.
User Interface Elements
Validating the need for UI elements is extremely useful while reviewing wireframes. This is when one can choose sliders or pickers over dropdowns or accordion views based on the context of usage, or make other appropriate choices. UI elements include:
  • Landing Screen (Look & Feel)
  • Header / Footer information
  • Title (Browser Title and Page/Screen Title)
  • Labels and other Tech Jargon
  • Logo / Buttons / Icons
  • Images
  • Text Fields
  • Settings (Login / Sign In / Logout)
  • Date / Copyright Format
  • Placement of Scrollbars, Dropdown menus
  • Accordion Views
  • Others
Navigation
  • Buttons / Links as visual cues
  • Workflows
Content
  • Presentation
  • Alignment
  • Size and Style
Functionality
  • Existing functionality – features that depict existing functionality
  • Missing functionality – features that may be missing, although, highly relevant to the context of the product.
The above information may or may not be available in full in the wireframes. Depending on the type of wireframes provided, it is good to review them and provide appropriate feedback. Feedback can be provided at two levels (a small sketch follows):
  • At an element level (e.g., a particular dropdown may be positioned wrongly on the landing screen)
  • At a product level (e.g., navigation could be streamlined better with respect to the 'Translation' feature)
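A small sketch of how these two levels of feedback might be recorded; the types, names and findings below are purely illustrative:

  // Illustrative: review feedback captured at element and product level.
  enum FeedbackLevel {
      case element(screen: String, element: String)  // e.g. a misplaced dropdown
      case product                                   // e.g. cross-flow navigation
  }

  struct ReviewFinding {
      let level: FeedbackLevel
      let note: String
  }

  let findings = [
      ReviewFinding(level: .element(screen: "Landing", element: "Sort dropdown"),
                    note: "Positioned away from the list it controls"),
      ReviewFinding(level: .product,
                    note: "Navigation around the 'Translation' feature could be streamlined"),
  ]
  print("\(findings.count) findings recorded")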
Challenges I faced
One of the biggest challenges with reviewing wireframes was the fact that transitions, interactions and other subtleties are not visible and need a further level of probing and questioning to get clarity. Although we might not worry much at the wireframing stage, it always helps to outline the interactions and behavior of the product early on. At each screen level, I would list all the interactions a feature might have with other features and design how those interactions could happen. This way, many flows get sorted out upfront.
Summary
Wireframing, whether on paper or using software, is much cheaper and faster than a working prototype, and offers benefits like portability and accessibility. Today, go-to-market cycles are shrinking, leaving less time to design products. This means that spending several person-years on programming an interactive prototype is a costly affair. Creating an ecosystem that believes in wireframes begins with creating great wireframes that communicate well, and iterating on them based on inputs from different kinds of users.
Wireframes have personally helped me gain design insight and find usability problems early, which means we save *some* time, *some* effort and *some* money that might otherwise have been spent on building *big-failure* products.
How have wireframes helped you?

10 August, 2015

Wireframes Testing - Part I



A wireframe is a rough skeletal guide for the layout of a website or an app, done with pen and paper or using wireframing software. A wireframe is sometimes also known as a screen blueprint.
Wireframes are usually created to understand the layout, interface, navigation and functionality within the product and how they are stitched together. They are low-fidelity work products; this means they lack graphics, color, and other elements of visual design, for the most part.

Why Should You Test Wireframes?

Frank Lloyd Wright once said, “You can use an eraser on the drafting table or a sledgehammer on the construction site.” Wireframing is one of the most valuable tools for usability testing early in the cycle. A lot of problems can be fished out early, simply by letting users play around with skeletal wireframes. Early information in turn helps teams fix gaps early on, allowing them to build better products.

How To Test Wireframes?

Testing wireframes is an age-old concept, yet employed by very few organizations. Before the explosion of startups, people often chucked everything in the SDLC to create a working product that could bring in money quickly. However, with Kanban, Lean and other methodologies, having a minimum viable product with good UX design became an obvious ask. This is where some teams started getting their wireframes tested. There are three ways of testing wireframes:
1. User Testing
In this method, users are invited to test interactive or non-interactive wireframes, and their feedback is captured either by asking them a series of questions about the wireframes or by designing a few tasks for them to execute and provide feedback on.
2. Remote User Testing
This method is similar to user testing method, except that this is not done face to face with users, but using video conferencing and other online collaboration tools.
3. Expert Review
This method employs a subject matter expert to review/test wireframes and provide detailed analysis.

Wireframing Tools

Wireframes can be hand drawn sketches on paper/whiteboard or they can be produced by using wireframing software – some of which are free.
Axure
Axure is a desktop app that runs on Windows and Mac. It has powerful features not just to create low-fidelity wireframes, but also highly interactive mockups.
Balsamiq
Balsamiq is a traditional wireframing tool with a focus on “rough sketches” that are close to hand drawn drawings.
The Pencil Project
Pencil is free and easy to learn for creating simple sketches.
There are many other tools in the market. One has to pick a tool that suits one's context. In recent times, Axure has become a go-to tool for creating 'walking wireframes' that are dynamic by design and mock real functionality, interactions and navigation, all in a single place. Axure goes a step further in creating high-fidelity wireframes and mockups depending on the target platform, be it web or mobile.

Advantages / Disadvantages of Wireframing

Advantages
  • Communicate innovative design ideas quickly
  • Facilitate early feedback mechanism to clients
  • Provide an opportunity to fix critical bugs/problems early in the SDLC
  • Make changes easily on wireframes, compared to making them on a live product
Disadvantages
  • Wireframes are limited by the power of the tool used
  • Interactions within wireframes may not be self-explanatory at all times
  • Poor collaboration during wireframing stage with corresponding scrum teams can destroy the benefits of creating wireframes in the first place

Design First, Then Prototype Approach

In the olden days, wireframes or other prototypes were conceptualized first. Only then would design ideas come into play, deciding what design elements needed to be added to the product. Some tools had limitations that restricted the implementation of unique design ideas. Today, better designs are more engaging and play a key role in areas like preventing user abandonment. Having said that, it is important to backtrack: design first, and only later figure out how to create wireframes or prototypes as per the design.
The next entry in this two-part article will touch upon the expert review method of testing wireframes and how it benefits teams.

23 July, 2015

heuristic evaluation - What's That?


A heuristic is a fallible means of solving a problem. A heuristic evaluation is a method where a usability evaluator identifies usability or user experience problems in a product against an existing list of widely accepted usability heuristics. In some organizations, heuristic evaluation is also called an expert review.
Many have developed heuristics in other spheres like testing and development. The earliest usability heuristics were defined by Jakob Nielsen and Rolf Molich in 1990, in their paper 'Improving a human-computer dialogue', which at the time was mostly targeted towards desktop products. Over time, product ideas have evolved and technologies have become better and more complex, changing the way usability work is done; heuristic evaluation followed suit.

How is heuristic evaluation done?

A simple heuristic evaluation process includes the steps below (a sketch of an evaluation form entry follows the list):
  1. Identify the usability expert (in-house or external) who will review the product against a defined set of usability heuristics
  2. Define the scope of the evaluation: specific features, only newer features, or the entire product
  3. Define the environment in which the evaluation needs to be performed (live product / test environment / sandbox)
  4. Direct the usability expert to perform the evaluation and fill in an evaluation form highlighting every usability problem encountered, at a task level
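As a sketch, one row of such an evaluation form could be modeled like this; the heuristics shown are a few of Nielsen's well-known ones, while the severity scale and field names are my own assumptions:

  // Illustrative: one row of a heuristic evaluation form.
  enum Heuristic: String {
      case visibilityOfSystemStatus = "Visibility of system status"
      case userControlAndFreedom    = "User control and freedom"
      case errorPrevention          = "Error prevention"
  }

  struct EvaluationEntry {
      let task: String
      let problem: String
      let heuristic: Heuristic   // every problem maps to a defined heuristic
      let severity: Int          // e.g. 1 (cosmetic) to 4 (catastrophic)
  }

  let entry = EvaluationEntry(
      task: "Navigate from Settings back to Home",
      problem: "No visible way to return to the Home screen",
      heuristic: .userControlAndFreedom,
      severity: 3)
  print(entry.heuristic.rawValue)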
Once the evaluation is complete, the results must be analyzed with corresponding stakeholders who might want to make appropriate decisions with respect to the product based on usability problems identified.

Usability Report

A typical evaluation report contains usability findings and a detailed explanation of the problems encountered. In my experience, when such reports are floated across technical teams to fix these problems, the teams are left wondering what they should do. For example, a usability problem like "I was unable to navigate to 'Home' from the 'Settings' screen" doesn't really tell the developer how to handle the problem or provide a fix (not until he delves deeper into it). Hence, it is good to insist that the usability expert provide feature recommendations in addition to usability problems. This means that for each usability problem identified, sufficient screenshots, investigation notes and subsequent recommendations (of what exactly needs to be done to fix the usability problem) are also recorded. In some cases, usability experts even include best practices from competitor apps to advocate their findings better.

One evaluator is NOT a user

Arnie Lund once said, "Know thy user, and you are not thy user". For the same reason, usability findings from heuristic evaluation often get discounted as 'some user's opinion that doesn't matter.' There is an additional risk of the evaluator being someone with 'not so normal' thoughts, and perhaps a poor representative of the average user. This leads many to frown upon heuristic evaluation.
In fact, Jakob Nielsen, in his research, found that one evaluator is no good. According to him, it is good to have at least 3 to 5 evaluators, who together might end up finding most of the usability problems in the product. This approach also helps in fine-tuning the findings, to differentiate between low-hanging fruit and really bad usability problems. The results are then aggregated and evaluated by an evaluation manager, who is different from the main evaluators. The impact of such a heuristic evaluation is much better than that of one done by a single evaluator.

A Complementary Approach

On a few e-commerce projects, I applied a complementary approach. While one or two evaluators provided feedback on the product, a separate team performed user testing with 15-25 users. At the end of user testing, the findings from users were collated into a 'Usability Report.' The results of both reports would be compared by an evaluation manager, who would then identify feature recommendations based on the inputs provided in both. This approach worked really well for startups.

Expert Reviews

The success of heuristic evaluation and other complementary approaches is defined not just by the process, but by the strength of the heuristics involved, the type of information captured, and the way in which it is presented to stakeholders. This is why heuristic evaluation isn't something that can be done 'on the fly' by anyone. It needs to be performed by experienced practitioners who are aware of their biases and present unbiased findings. Such evaluations performed by usability experts are called Expert Reviews.
In short, heuristic evaluation is done by evaluators who refer to specific heuristics and evaluate the product against them. Every usability problem found using this method is mapped against an existing heuristic based on which the evaluation was done. Expert reviews, on the other hand, are performed by subject matter experts in an informal atmosphere using a list of heuristics that may not be well-defined at all.
Update (4th Aug 2014)
My teacher, James Bach, was kind enough to point out how Heuristic Evaluation is different from heuristic evaluation, and to highlight a few gaps in my understanding of heuristic evaluation. I have updated this post based on my improved understanding of the same.

29 June, 2015

Recruiting Users for User Testing



Mobile User Personas
I have conducted user testing sessions for several clients in different capacities while I was in the services space. When I say 'different capacities', I mean some of those users were in-house and some were external. Some testing projects were on a small scale, with fewer than 10 users involved, while others had several scores of users. Irrespective of the scale, a few questions often popped into my head: "Who are the 'RIGHT' kind of users?", "How many users are good enough?" and so forth. At times, I wondered if the users I hired represented the best sample of a real user base spread across the globe. Recruiting users is the most difficult and critical part of user testing. Here is how I approached this challenge:

App Context

Suppose you are recruiting users to test a yet-to-be-released mobile yoga app that caters to Ashtanga Yoga aspirants. There are several formats of yoga in the market, especially in the western world, and it is important to note that many Ashtanga Yoga practitioners believe theirs is the most authentic form of yoga ever. Which users from this large community should we consider for user testing of this particular yoga app? Whom do we recruit? How do we recruit? On what basis?

Finding the 'RIGHT' kind of users? 

Identifying the right kind of users is a challenging task. Many organizations follow the 'hallway testing' approach, where users are chosen at random as though they were walking down the hallway. These users may not be the best possible sample given diversity factors like geography, culture, age group, profession, tech-savviness and so forth. It is always good to know who the users are and what their key characteristics are. Without this information, we might just react like horses with blinkers on.

How to recruit users

In the above-mentioned context, the consumers of this app are yoga practitioners, teachers, students and the general public. These people may or may not be the users we are looking for. A few of them may not even know how to use a mobile app. Some might be extremely tech-savvy and represent a fairly good sample. Recruiting users depends on asking the right questions for the context of the product. The user testing team can design a 'User Recruitment Questionnaire' that helps screen users and shortlist the most suitable candidates.

User Recruitment Questionnaire

A User Recruitment Questionnaire, also known as a screener template, in its simplest form has three categories:
1. General Questions
This section asks general questions related to user demography, such as:
  • Gender
  • Age Group
  • Occupation / Business Sector
  • Nationality
  • Income Group
2. Product Context-Specific Questions
This section includes questions specific to yoga as the product under test deals with yoga training:
  • Do you teach Yoga?
  • How long have you been teaching yoga?
  • What specializations do you have in Yoga?
  • How often do you teach yoga in a week?
Note that the above questions address only practitioners and teachers at this point. You can include more questions specifically targeted at recruiting yoga students as well.
3. Tech-savvy-ness
  • Are you a smartphone user?
  • How often do you access the internet on your smartphone?
  • Do you have technical knowledge of using mobile devices?
  • What is your smartphone model (device name, manufacturer and model)?
  • Have you used any yoga apps in the past?
This recruitment questionnaire can be distributed to potential users via email, Google Forms or an online survey. Once user responses are available, we can choose which kinds of users we want from the list, based on the product context and the user demography we are targeting.
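A minimal sketch of short-listing respondents from such screener data; the fields and thresholds below are illustrative, not a recommendation:

  // Illustrative: filter screener responses to shortlist candidates.
  struct ScreenerResponse {
      let name: String
      let teachesYoga: Bool
      let yearsOfPractice: Int
      let ownsSmartphone: Bool
  }

  let respondents = [
      ScreenerResponse(name: "A", teachesYoga: true,  yearsOfPractice: 8, ownsSmartphone: true),
      ScreenerResponse(name: "B", teachesYoga: false, yearsOfPractice: 1, ownsSmartphone: false),
      ScreenerResponse(name: "C", teachesYoga: false, yearsOfPractice: 4, ownsSmartphone: true),
  ]

  // Shortlist smartphone users with at least two years of practice.
  let shortlist = respondents.filter { $0.ownsSmartphone && $0.yearsOfPractice >= 2 }
  print(shortlist.map(\.name))   // ["A", "C"]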

How many users are good enough

Naive user testing teams start with 1-2 users. Others say 5-10 users are adequate. I have had good output with 30 users on a few projects. The question really is, 'How many users are good enough?' Jakob Nielsen, a user advocate and principal of the Nielsen Norman Group, has done extensive research in user testing and thinks that 5 users is a good enough number to start with. As per Nielsen, 5 users can find almost as many usability and user experience problems as a much larger number of participants.
Regardless of whether user recruitment is done through online communities, friends and family, or beta/private beta programs, using this approach can be beneficial. Things might not work as expected the first time around. It might take a couple of iterations to implement this approach, make mistakes and then fix them before you start to see positive results. Nevertheless, it's better to try and fail than to do nothing at all.
What approach does your team take to recruit users? How well has it worked for you?

11 June, 2015

Here's what you did wrong - Recoverability Testing and UX Connection

A few weeks ago, I was on the ground floor of my office when the elevator arrived. I pressed '4' and continued chatting with my colleague. We reached the 4th floor and noticed the elevator didn't stop. I was under some imaginary pressure to prove to my colleague that I had indeed pressed '4', as she stared at me. While I was explaining what had just happened, she said, "The elevator behavior is right. You are wrong." Apparently, if one presses '4' and the elevator goes to the basement or other lower levels and returns to the ground floor, the switches are reset. Was it human error or system error?
Most failures are evil because they tell us we did the wrong thing. They tell us that it's something WE did that resulted in failure. They tell us that WE screwed up. According to Don Norman, over 90% of industrial accidents are blamed on human error. If it were 5%, we might believe it. But when it is virtually always, shouldn't we realize that it is something else? That the systems were wrong? Perhaps.
Sidney Dekker believes that "human error is not a cause of serious accidents, but a symptom of trouble deeper inside a system". We are humans. We cannot be accurate and precise all the time. We are preoccupied, we are in different states of mind at different times, and we have our own ways of living our lives and dealing with challenges. As a result, we make mistakes. Is that really a failure on our part, or was the system not designed intuitively enough to prevent human mistakes, or even to guide the user when mistakes happen?
Murphy's law states that "Anything that can go wrong, will go wrong". While things can go wrong, helping users recover from such situations can go a long way in building credibility and loyalty with the user. 
Recoverability of errors is a key element to consider while designing products. When errors occur, the following five elements can repair, to some degree, the damage the error might have caused to users in the first place.
1. Provide visibility to the user of what was done
An error occurs when a user does something the system doesn't know how to handle. When users make mistakes and get no feedback, they are completely lost. For example, an email is eaten up by a virus, and the recipient never knows a thing about it. When an error occurs, it's good to tell the user exactly what they did a few moments ago. This might help the user realize whether their actions were right or not and make amends accordingly.
2. Do/Show/Tell the user what went wrong
Once the error has occurred, the user needs to know what really went wrong in the first place. The message displayed should state clearly what actually went wrong. This information complements the visibility described above: the user is told what they did versus what went wrong after that.
3. Indicate how the user can reverse unwanted outcome
Users are least interested in geeky or innovative error messages. They just want to get out of the error situation as soon as possible. Including error codes like 'Type 2 error number 10000345 occurred' is neither informative nor useful. The error message should tell the user how to reverse the error, or what the next best thing to do is in order to recover from it. In short, it is critical for the user to know how to get back to the base state of the application, where they left off before the error occurred. Additionally, it is good to give the user useful advice to fix the problem. For example, on an e-commerce app, just saying a book is out of stock is definitely worse than providing a 'Notify' feature that alerts you when the book is back in stock.
4. If reversibility is not possible, indicate this to the user
In some cases, reversing an error is not possible. In such cases, it is best to tell the user to force-close the application and start from scratch, or from a specific location in the app. Take the example of password fields. When a user enters the simplest possible password, an error message pops up with a big list of instructions for a strong password. Instead, if the user is warned upfront about these requirements in a label below the password field, the hassle could be avoided.
5. Preserve User Data
The app must be able to preserve the user's data at all times and never corrupt or leak confidential information. Period!
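On iOS, elements 2 and 3 map naturally onto Foundation's LocalizedError protocol. Here is a hedged sketch; the error case and the copy are invented for illustration:

  import Foundation

  // Illustrative: a user-facing error with a recovery suggestion.
  enum StoreError: LocalizedError {
      case outOfStock(title: String)

      var errorDescription: String? {      // what went wrong, in plain words
          switch self {
          case .outOfStock(let title):
              return "'\(title)' is currently out of stock."
          }
      }

      var recoverySuggestion: String? {    // the next best thing to do
          switch self {
          case .outOfStock:
              return "Tap 'Notify me' and we'll tell you when it's back in stock."
          }
      }
  }

  let error = StoreError.outOfStock(title: "The Design of Everyday Things")
  print(error.localizedDescription)        // no geeky error codes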
An error that can be made will be made. For example, if you type the misspelled 'Murph's Law' into Google, it displays results, but also shows 'Showing results for Murphy's Law'. It turns an error into a good feeling. Transforming an error situation into actually helping the user is an intelligent way to deal with an error. Here's a message from Don Norman about error messages: "Error messages punish people for not behaving like machines. It is time we let people behave like people. When a problem arises, we should call it machine error, not human error: the machine was designed wrong, demanding that we conform to its peculiar requirements. It is time to design and build machines that conform to our requirements. Stop confronting us: Collaborate with us."
Graceful Recoverability from errors defines a new avenue for organizations to create great user experiences.
What do you think?

01 June, 2015

How to Test User Experience

This article was originally published on TechWell.
User experience (UX) involves the range of emotions a user feels while using a product or service. The product or service may have amazing features and capabilities, but if it fails to delight the user, the person will hardly use it. United Airlines is setting an aspirational target for its customers' UX: it is striving to create an in-flight experience that is "legroom friendly," "online friendly," and "shut-eye friendly."
Understanding how users feel involves becoming aware of man-machine interactions. This knowledge then can be used to improve the overall user experience. Sadly, many of those who talk about UX as though it’s a set of tools and approaches often forget about the human side of products. A range of tests can be performed while a user is engaging with a piece of software to ensure that the user is never forgotten at any point of the development process.
Emotional Response Test
Users don't hold a script in one hand while using a product or service with the other. By probing users and recording their emotions, ranging from amusement to annoyance, UX teams and testers can gather invaluable information about what makes a product great and what makes it a nuisance.
User experience professional Robert Hoekman Jr. has a list of tenets on the value of user experience strategy. One of the tenets is "A user’s experience belongs to the user. An experience cannot be designed. It can, however, be influenced. A designer’s job is to be the influencer."
First Impressions Test
What can you tell about people or websites in a short time? A lot. Tests like the Five Second test show that student evaluations given after the students are shown only a few seconds of video are indistinguishable from evaluations from students who actually had the professor for an entire semester. Additionally, visual appeal, navigation, and click tests give inputs about users’ early impressions of products and websites, which can be used to understand what makes a delightful user experience.
User Pain Points
What truly delights users is implicit most of the time. Bill Gates was absolutely correct when he remarked that unhappy customers are a great source of information for learning about UX. You can gather user pain points from complaints and warnings, by talking to users more often, and by observing them using websites and software and recording their emotions. These days, it's easy to get customer feedback at the drop of a hat through social media.
As Steve Jobs said, design is "not just what it looks like and feels like. Design is how it works.” Testers can use a variety of heuristics to tell the UX team what does and doesn’t work for users so that the entire project team knows exactly what gives their customers the greatest experience possible.
How do you test User Experience?

08 February, 2015

Competitor Analysis: A Simple How-To Guide To Get Started


Competitor Analysis is an assessment of the strengths and weaknesses of current and potential competitors of the product at hand. The analysis highlights not just the positive and negative aspects of the product, but also potential opportunities and threats. Some organizations loosely call it product 'SWOT' analysis.

Competitor analysis can be done at multiple levels. One can simply pick a superset of all features available across comparable products and map each product's capability against every feature. Additionally, comparison can be done at a technology level, for example comparing Teradata with other data warehouse products. Some comparisons can be done across specific market segments as well. Typical comparison levels include:
  1. Product Features
  2. Technology
  3. Market segment
  4. Geographical areas
  5. Others
Competitor Profiling
Answering the question, "Why do you want to perform competitor analysis?" is a critical first step. The answer in turn helps identify several key indicators on which the exercise can be based. A few indicators that are instrumental in performing competitor analysis are listed below:
  1. Industry / domain
  2. Potential competitors / competitor products
  3. Users of competitor apps (to understand why these users prefer the competitor)
  4. The competitor's market share and why they may be ahead
  5. The competitor's strategy (sales, marketing, branding, promotions, advertising)
  6. Social media presence, for mass-market apps
  7. Areas where the competitor excels and the product lags far behind
  8. Cost / distribution factors
An organization gains a competitive advantage only when it outperforms its competitors in a way that matters to the customer. It is hence important to ensure that the product has key differentiators that are clearly a hit with customers. This doesn't mean that one has to cram all the features into a single product and offer a gigantic mixture of all solutions in one place. Dell, for example, is known for mass customization depending on stakeholder needs.

Another area is the costing aspect of the product itself. Customers are constantly wondering if they can find a great quality product that solves a problem or unmet need at a very cheap price, and they are constantly on the lookout for organizations with cheaper pricing. This is where extensive research needs to be done to arrive at a suitable price that caters to different market segments, customer personas and economies.

The last aspect of gaining a competitive advantage concerns the stickiness of the product. "What's sticky about your app?" can make or break any product. This is different from possessing differentiating features. For example, "What's so addictive about Facebook that people would chuck filling in timesheets and chat with friends online at the cost of their organization's money?" Products need to be capable of hooking customers through their inherent purpose.

If someone is smarter than you, make him your friend. If he can't be your friend, buy him out and kill him. A series of acquisitions and mergers in similar market segments is testimony to the fact that many organizations aim to create a monopoly in the market, either by buying competitors out or by owning their technology to move forward faster.

Different Approaches to Competitor Analysis
There are several approaches to performing competitor analysis. I have listed a few that I have personally explored and found successful in communicating apt feedback to stakeholders:

  1. Star ratings: features/scenarios are rated by giving star ratings, a single star meaning poor and five stars meaning outstanding
  2. Points based: features/scenarios are rated using a points system of 1 to 5 per feature or scenario
  3. Subjective feedback: some stakeholders prefer detailed subjective feedback, as it is easier to understand the underlying analysis from prose than from symbols or numbers (i.e., stars and points)

Implementation Example
Consider a simple example of performing competitor analysis of the Flipkart.com website against Amazon, using a mix of approaches (2) and (3) listed above. Start by identifying the purpose of the exercise. For the benefit of this article, let's say the purpose is to "Identify features where flipkart.com lags behind amazon.in".

To accomplish this, make a list of all features available on both websites. Analyze every feature in detail on both websites and associate a rating with it. You can also provide subjective feedback explaining which feature is better and why you think so. A sketch of this roll-up follows.
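As a sketch of mixing approaches (2) and (3), here is an illustrative roll-up; the features, scores and notes are invented, not real findings about either site:

  // Illustrative: per-feature points plus subjective notes,
  // rolled up into an overall score per site.
  struct FeatureComparison {
      let feature: String
      let siteAPoints: Int   // 1 to 5
      let siteBPoints: Int
      let note: String
  }

  let comparisons = [
      FeatureComparison(feature: "Search suggestions", siteAPoints: 3, siteBPoints: 5,
                        note: "Site B surfaces category-aware suggestions"),
      FeatureComparison(feature: "Checkout flow", siteAPoints: 4, siteBPoints: 4,
                        note: "Comparable number of steps on both sites"),
  ]

  let totalA = comparisons.map(\.siteAPoints).reduce(0, +)
  let totalB = comparisons.map(\.siteBPoints).reduce(0, +)
  print("Overall: Site A \(totalA) vs Site B \(totalB)")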

At the end of the exercise, this is how a snippet of the competitor analysis document looks:



In some cases, you can also make a list of different tasks and measure their efficiency using metrics like 'success in getting the task done', 'time taken to complete the task', 'customer satisfaction level' and so forth.

Customer touchpoint testing is another metric to capture, for both the main product and the competitor product. It helps analyze how each organization handles user complaints, escalations and queries across media like calls, email, chat, service requests, feedback and recommendation tools, and so forth.

Once this activity is done for all features, an overall rating can be arrived at as shown below.


Additionally, we can provide a summary report of our findings and elaborate on Product Stickiness.

Competitor analysis should be initiated with a well-defined objective. Once it is complete, the respective stakeholders must work towards fixing the gaps identified and contribute towards building a better product. There are different approaches that can be used to perform competitor analysis, including methods as simple as visiting your competitor as a potential client and getting insider news :-). There are professional organizations which do a great job of performing such analysis; it comes at a high cost, yet is valuable enough. Which method one chooses matters less than what will be done with the results in the end. In my experience, a few organizations invested a lot of time in competitor analysis, only to pack up the results and bury them in the safest product file. If you want to perform this analysis, be sure and be serious.

To summarize, anyone can learn to do competitor analysis using the simple steps mentioned above and evolve it over a period of time. The results can be shared with appropriate stakeholders to highlight the pros and cons of the product at hand and to drive suitable improvements in subsequent releases.

Reference
http://en.wikipedia.org/wiki/Competitor_analysis