A case study of a team's process, zooming in on the research work performed within the Double Diamond methodology:
The vision for implementing assets in the system was to give a single asset multi-profile capability, meaning that potentially every asset will be monitored by more than one profile; currently, every machine is monitored by only one. There was also a terminology change that had been partially applied in the plant product, and it was time for this product to catch up. The same goes for the experience, as we have shared personas across the products.
The strategy for this change was to design a dedicated area with asset-related data for slicing, dicing, and taking action.
How? By creating a dedicated area for each asset, representing the data according to asset type, within an intuitive user experience that makes the information easy to consume.
Objectives:
- How people interact with the current product and how they feel about the process
- What information they need for a single asset
- What actions they expect to be able to take on each asset
Research audience:
Existing customers of Cyber (HQ operators, site engineers, site managers, higher management)
Methods:
Qualitative and quantitative.
Generative Research
We needed data to base our decisions on, so the main goal of the research was to gather thorough information from the product's different end users and understand what information they need, driven by issue solving, when focusing the view on one single asset.
Research Steps
Site map
First of all, we had to understand where we stood. We mapped the existing system's architecture and presented it to our internal stakeholders and R&D team leaders. Beyond mapping, we marked all the questionable screens that contained the challenging areas in the system's terminology.
Competitor analysis
During this process, two other team members worked on a competitor analysis, which helped us understand what demons we were facing and how far we were from our assumptions regarding the uniqueness of our product. Moreover, their input gave us a fresh look at information architecture and representation, as far as that was possible, of course.
Prior interviews review
We already had several recorded interviews with users from different customers. These gave us a better understanding of how they act when dealing with issues through the system, which personas are involved in their processes, what is currently doable and what isn't, what is good and what is bad, and what does not exist at all and might cause customers to leave for our competitors' solutions.
External user survey
While listening to the interviews and summarising them, we raised plenty of questions, which I turned into a survey. The survey was sent to internal and external customers and end users, and the responses provided interesting feedback about how they use the system, which helped determine how to implement the new, upgraded design. Some of the highlights: an action called FIX is never used by most respondents, yet it is presented at the top of each relevant screen. The same share of respondents said that the Site's Name is a top priority for asset identification, and when asked whether they document their steps to reuse or share with colleagues, most said they do, using email for that.
SME interviews
During the process, we had biweekly meetings with some strategic decision-makers. They were pleased, engaged, and eager to assist.
Since it was very important for me to get EVERYONE's opinion, some of the questions were sent out as an anonymous survey so that each of them would be heard and taken into consideration without being interrupted by other members on the call.
External users' interviews
I held additional interviews with a few more customers for verification.
These meetings confirmed some of our assumptions and gave us a deeper understanding of how they use the products, both the HQ system and the plant's, not only in relation to the homepage but also the usage of Alarms and their unimaginable volume.
Insights & fundamental assumptions
Later on, we introduced the following fundamental assumptions to the team:
J2BD
Five J2BD were defined at this point in the process.
Object Prioritization - Unmoderated Test
We mapped the collected data from all three systems in order to achieve the clearest and best representation of it, and then… had to stop after reaching 60 items! My next step was to conduct an unmoderated test in which participants prioritized the items for each asset type.
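To make the aggregation step concrete, here is a minimal sketch of how the responses from such an unmoderated prioritization test could be rolled up into a ranked list per asset type. The item names, the sample responses, and the mean-rank approach are my own illustrative assumptions; they are not taken from the actual study.

```python
from collections import defaultdict

# Minimal sketch: aggregating unmoderated prioritization results by mean rank.
# Item names and participant rankings are hypothetical placeholders; the real
# test covered roughly 60 items across several asset types.

def mean_ranks(rankings: list[list[str]]) -> list[tuple[str, float]]:
    """Given each participant's ordered list (most important first),
    return items sorted by their average position (lower = higher priority)."""
    totals: dict[str, float] = defaultdict(float)
    counts: dict[str, int] = defaultdict(int)
    for ranking in rankings:
        for position, item in enumerate(ranking, start=1):
            totals[item] += position
            counts[item] += 1
    return sorted(((item, totals[item] / counts[item]) for item in totals),
                  key=lambda pair: pair[1])

# Hypothetical responses for one asset type
responses = [
    ["Site Name", "Last Alarm", "Asset Type", "Firmware Version"],
    ["Site Name", "Asset Type", "Last Alarm", "Firmware Version"],
    ["Last Alarm", "Site Name", "Asset Type", "Firmware Version"],
]
for item, rank in mean_ranks(responses):
    print(f"{item}: {rank:.2f}")
```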
Review Sessions
Once the first phase of the design was pleasing to all parties, we moved on to a concept review with end users and customers.
Success Criteria - Phase 1
We defined two success criteria for the different phases of the process.
The first was set before we started the review sessions: the presented data meets the needs, but parts of it need to be added or changed. Each participant gave a score of 1 to 5; the feedback is summarised below:
Feedback Summary
Overall, we held 8 sessions with 20 participants to cover the different personas in the system. Here are a few highlights of the feedback we heard:
"This page is hard for me"
"People aren't staring at this display for 8 hours. When they come back they need to know what happened recently"
"Right panel on an alarm - the most valuable info here is what people did in the past"
"Very encouraged by what I see. It’s simple and intuitive"
"The focus needs to be on - What's the issue is and how to resolve it & it prompt that"
"Concerned about the performance of the database won’t load within a reasonable amount of time"
"Looks surprising and informative"
And many others I cannot reveal here.
In the end, the design received a score of 3.81 out of 5.
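As a side note on the arithmetic, a composite score like 3.81 out of 5 can be produced by averaging the individual 1-to-5 ratings. The sketch below assumes exactly that; the sample ratings in it are hypothetical, not the real session data, and the case study only reports the final figure.

```python
# Minimal sketch: averaging per-participant ratings into a single score.
# The ratings below are hypothetical; only the final figure (3.81 / 5)
# is reported in the case study.

def average_score(ratings: list[int]) -> float:
    """Return the mean rating, rounded to two decimals."""
    if not ratings:
        raise ValueError("no ratings collected")
    return round(sum(ratings) / len(ratings), 2)

# Example: 20 hypothetical ratings on a 1-5 scale
hypothetical_ratings = [4, 4, 3, 5, 4, 3, 4, 5, 4, 3, 4, 4, 3, 5, 4, 4, 3, 4, 4, 2]
print(average_score(hypothetical_ratings))  # 3.8 for this made-up sample
```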
Next Steps...
Insights were presented to Product and R&D stakeholders during the process and after the review sessions.
After reevaluation by the Product team, this project was paused until further notice due to a lack of resources for development.