Transforming Rammas from a novelty into a trusted, task-driven customer service assistant through conversation design, UX research, and a complete redesign.

Client
DEWA (Dubai Electricity and Water Authority)
My Role
UX/UI Designer
Location and Year
Dubai, 2021
Project Status
Live
Rammas is DEWA’s humanoid robot, originally developed by SoftBank Robotics and deployed across customer service centres in Dubai. Designed to assist visitors with queries using conversational AI, the hardware was capable, but the experience didn’t live up to that potential.
I led the end-to-end redesign of Rammas, covering UX research, conversation design, interaction flows, and visual language, working closely with developers and stakeholders throughout. The goal was to shift Rammas from something users ignored to a service they actively chose to use.
The redesign led to a 40% increase in customer satisfaction, a 20% decrease in bounce rate, and an average of 150+ customer interactions per week within the first three months of launch.
Some metrics, figures, and project details in this case study have been modified or omitted in accordance with a non-disclosure agreement. The work and outcomes represented are genuine.
Why the redesign was needed
Full potential, barely used
When Rammas was first deployed, it was used mainly for basic greetings and simple queries, far below its actual capability. The features that did exist were poorly executed, leading to interactions that felt incomplete and frustrating. Most users either didn’t notice it, didn’t understand what it could do, or disengaged quickly after trying.
Feedback from the initial audit made the gap between perception and potential clear. Positive reactions such as curiosity, novelty, and surprise at the Arabic language support showed there was real opportunity for engagement. The limitation wasn't the hardware; it was the experience.

User feedback from post-interaction surveys showing a mix of curiosity and frustration

Not a hardware problem. An experience problem.
Rammas wasn’t constrained by technology. Built on the SoftBank Pepper robot, it had cameras, microphones, depth sensors, and a tablet interface that enabled voice interaction, gesture, presence detection, and multimodal communication in dynamic public spaces.
But none of this translated into the experience. Instead of leveraging Pepper's ability to guide attention and communicate from a distance, the interface relied on dense text and buried options. In a busy environment, users couldn't quickly grasp what to do, reducing a highly interactive system to something that functioned like a static screen.
The challenge wasn't building something new from scratch. It was unlocking what was already there.
The approach we took
Shifting the interaction model
The core issue wasn’t just usability, but how the interaction was structured. Rammas relied on users to lead the conversation, with little guidance or clarity on what was possible.
The redesign shifted this from a passive interface to a guided experience, where users were led through clear paths based on intent, reducing uncertainty and improving completion rates.
What we achieved at a glance
What began as a usability fix evolved into a full rethink of the interaction model. The case study below breaks down the decisions and process. Here’s a snapshot of what we achieved at the end of this project.
Key achievements
→ Clear, guided interactions that reduce uncertainty and drop-off
→ Faster access to key services through structured conversation paths
→ More intuitive interactions aligned with real user behaviour
→ Seamless recovery and escalation without dead ends
→ A more approachable experience that encourages continued engagement
Defining success
Aligning on what success looks like
With the approach agreed, we aligned on what a successful outcome would look like across four dimensions:
Customer Satisfaction
Turn Rammas into a reliable, easy-to-use assistant that genuinely helps users complete their queries without needing to escalate to a human agent.
Engagement
Improve discoverability and interaction rates by making Rammas feel alive, approachable, and useful from the very first moment.
Retention
Increase engagement by guiding users through to task completion and reducing early drop-off.
Staff Load Reduction
Enable customers to handle common queries independently, freeing up human agents for more complex cases.
Key Performance Indicators (KPIs)
We defined directional benchmarks based on the most common issues we aimed to solve.
Satisfaction
Measurable improvement in post-interaction survey scores
Bounce Rate
Reduction in users abandoning interactions before completion
Interaction Volume
Increase in average weekly interactions post-launch
The process behind the transformation
Research first, design second
Before any design work began, the priority was to fully understand the experience and its context: user feedback from service centres, an assessment of the robot's capabilities, and benchmarking against similar systems.
Understanding the current landscape
The first step was a thorough UX audit of the existing experience, combining user feedback from post-interaction surveys, focus groups at customer service centres, and a review of available usage data.
Alongside this, I conducted a heuristic evaluation of the interface to identify issues that hadn’t emerged directly from user feedback. This included accessibility gaps, problems with information architecture, and interaction patterns that didn’t account for the physical context of using a robot-mounted screen.

Heuristic evaluation of the legacy interface, highlighting accessibility gaps and navigation issues
Assessing capabilities and benchmarking
With the core issues mapped, I researched the robot’s hardware and software capabilities to understand both its limitations and its untapped potential. This ran alongside a competitive analysis of how similar technology was being used by other organisations locally and globally.
Given the project’s tight timeline, I used an impact-effort matrix to prioritise improvements that would deliver the most value in the shortest time, ensuring the first release addressed the highest-friction points without overextending the scope.
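The bucketing behind an impact-effort matrix can be sketched in a few lines. The improvement names and 1–5 scores below are illustrative stand-ins, not the project's actual ratings:

```python
# Sketch of impact-effort prioritisation: each candidate improvement is
# scored 1-5 for impact and effort (illustrative values only), then
# bucketed into the classic four quadrants.
improvements = [
    ("Guided intent menu on home screen", 5, 2),
    ("Arabic voice persona rework",       4, 5),
    ("Fallback-to-agent escalation",      5, 3),
    ("Idle-state attract animation",      2, 2),
]

def quadrant(impact, effort):
    """Classify an item into one of the four impact-effort quadrants."""
    if impact >= 3:
        return "quick win" if effort <= 3 else "major project"
    return "fill-in" if effort <= 3 else "avoid"

for name, impact, effort in improvements:
    print(f"{quadrant(impact, effort):13} {name}")
```

Items in the "quick win" quadrant were the natural candidates for the first release, since they address high-friction points at low cost.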
Benchmarking Rammas against similar deployments locally and internationally

Impact-effort matrix used to prioritise improvements for the first release
Ideation & information architecture
With research complete, I began restructuring the information architecture, defining how users navigate the experience, which conversation paths are supported, and how flows guide users from greeting to task completion without dead ends.
Speech patterns and response logic were defined alongside the visual flows, ensuring the robot’s verbal and on-screen communication worked together as a single, coherent experience rather than two separate layers.
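The "no dead ends" property of the restructured flows can be expressed as a simple graph check: every screen or conversation state reachable from the greeting must still be able to reach completion or escalation. The node names below are illustrative, not the actual flow map:

```python
# Sketch of the restructured flow as a directed graph. The invariant:
# every node reachable from the greeting can still reach "done"
# (task completion or human escalation) - i.e. no dead ends.
FLOW = {
    "greeting":       ["choose_service"],
    "choose_service": ["check_bill", "report_outage", "escalate"],
    "check_bill":     ["show_balance", "escalate"],
    "report_outage":  ["confirm_report", "escalate"],
    "show_balance":   ["done"],
    "confirm_report": ["done"],
    "escalate":       ["done"],
    "done":           [],
}

def no_dead_ends(flow, start="greeting", goal="done"):
    """True if every node reachable from start can still reach the goal."""
    # Collect all nodes reachable from the start (iterative DFS).
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(flow[node])

    def reaches_goal(node):
        seen2, stack2 = set(), [node]
        while stack2:
            n = stack2.pop()
            if n == goal:
                return True
            if n not in seen2:
                seen2.add(n)
                stack2.extend(flow[n])
        return False

    return all(reaches_goal(n) for n in seen)

print(no_dead_ends(FLOW))  # True
```

Running a check like this against each flow map makes the "no dead ends" requirement verifiable rather than aspirational.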

Restructured information architecture mapping user navigation paths

Building the conversation in Dialogflow
With the conversation flows defined, we moved into building them in Google Dialogflow, the platform powering Rammas’s natural language understanding. Each flow was broken down into intents, defining how the system recognises and responds to user input. Training phrases were written for every intent, covering different ways users might ask about bills, report outages, or request services, in both English and Arabic.
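The relationship between intents and bilingual training phrases can be illustrated with a plain-Python sketch. Dialogflow does ML-based matching internally; the naive word-overlap matcher and the intent names below are simplifications for illustration only:

```python
# Simplified illustration of the intent / training-phrase structure.
# Dialogflow's real matching is ML-based; this keyword-overlap matcher
# only shows how intents, phrases, and languages relate.
INTENTS = {
    "check_bill": {
        "en": ["what is my bill", "how much do I owe", "show my latest bill"],
        "ar": ["كم فاتورتي", "أريد معرفة فاتورتي"],
    },
    "report_outage": {
        "en": ["there is a power cut", "report an outage", "no electricity"],
        "ar": ["انقطاع الكهرباء", "أريد الإبلاغ عن عطل"],
    },
}

def match_intent(utterance, lang="en"):
    """Return the intent whose training phrases share the most words with
    the utterance, or None if nothing overlaps (-> fallback intent)."""
    words = set(utterance.lower().split())
    best, best_score = None, 0
    for intent, phrases in INTENTS.items():
        score = max((len(words & set(p.lower().split()))
                     for p in phrases.get(lang, [])), default=0)
        if score > best_score:
            best, best_score = intent, score
    return best

print(match_intent("how much do I owe this month"))  # check_bill
```

A `None` result is exactly the case the fallback intents described below exist to handle.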
Each intent was paired with a response, with fulfillment webhooks handling cases that required live data from DEWA’s backend systems. Fallback intents were set up for anything the system couldn’t match confidently, guiding users back on track or escalating to a human agent when needed.
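A fulfillment webhook of this kind can be sketched as a handler that inspects the matched intent and builds a response payload. The Dialogflow ES request/response shape (`queryResult`, `fulfillmentText`) is real; the intent names, confidence threshold, and `get_balance` backend call are simplified assumptions, not DEWA's actual integration:

```python
# Sketch of a fulfillment webhook handler using the Dialogflow ES
# request/response JSON shape. The backend lookup is a hypothetical
# stand-in for DEWA's live systems.
CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff; low confidence -> recovery path

def get_balance(account_number):
    """Hypothetical backend call; a real webhook would fetch live data."""
    return "AED 245.00"

def handle_webhook(request_json):
    query = request_json["queryResult"]
    intent = query["intent"]["displayName"]
    confidence = query.get("intentDetectionConfidence", 1.0)

    # Low-confidence matches get guided back on track, not a wrong answer.
    if confidence < CONFIDENCE_THRESHOLD or intent == "Default Fallback Intent":
        return {"fulfillmentText":
                "Sorry, I didn't catch that. Would you like to check a bill, "
                "report an outage, or speak to an agent?"}

    if intent == "check_bill":
        account = query.get("parameters", {}).get("account_number", "")
        return {"fulfillmentText":
                f"Your current balance is {get_balance(account)}."}

    # Anything unhandled escalates to a human instead of dead-ending.
    return {"fulfillmentText": "Let me connect you to a customer service agent."}

resp = handle_webhook({
    "queryResult": {
        "intent": {"displayName": "check_bill"},
        "intentDetectionConfidence": 0.92,
        "parameters": {"account_number": "12345"},
    }
})
print(resp["fulfillmentText"])  # Your current balance is AED 245.00.
```

The catch-all escalation branch is what keeps the conversation from dead-ending when the system can't help.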
This was before large language models were widely used, so every interaction had to be explicitly designed. That constraint made the work more deliberate, with no room for ambiguity in how conversations were handled.

A snapshot of the Dialogflow console showing one of the intents built to handle DEWA customer queries
Sketching & wireframes
Early ideation focused on exploring a wide range of directions before committing to a structure. Given the timeline, I moved directly from sketches to high-fidelity designs, using sketches as the primary validation step.

Early ideation sketches exploring a wide range of interaction directions

Refined sketch used as the primary validation step before moving into high-fidelity design
Where it all came together
The finished product
The redesign shifted Rammas from a static, formal interface to a more approachable and responsive experience. A softer visual language and clearer layouts improved readability and reduced friction.
Conversation flows and microcopy were refined to guide users through each step, with built-in recovery paths and escalation options. This made interactions more predictable, reduced drop-off, and increased overall engagement.




The impact we created
Rolled out and recognised
Rammas initially launched at the DEWA Head Office as a test case before rolling out to other offices. Usability testing sessions ran for one month post-launch to gather feedback from real users on the new designs, validate the changes, and drive iterations where needed.
The results were overwhelmingly positive, with a few suggestions for further improvement that we added to our enhancement plan.
Satisfaction: measured via post-interaction surveys
Bounce rate: measured via analytics for the first three months post-launch
Interaction volume: weekly average for the first three months, measured via analytics
Post-launch
Following a successful launch at the head office, Rammas was rolled out across all DEWA locations, including the Digital DEWA office at Emirates Towers. It was also featured at the DEWA Pavilion at Expo 2020, where it interacted with over 500,000 visitors during the event.

Rammas at the Digital DEWA office in Emirates Towers, Dubai

Visitors having a chat with Rammas at the DEWA Pavilion at Expo 2020
What I learned along the way
Reflections & takeaways
This was my first time designing for a physical robot, and it introduced challenges you don’t face in purely digital products. The user’s position, the surrounding environment, the robot’s movement, and the relationship between voice and screen all had to be considered. Every decision needed to account for this physical context, not just what appeared on the interface.
The biggest shift was moving from a screen-first mindset to an experience-first one. The screen was only one part of the interaction. What mattered just as much was how the conversation flowed, how the system responded, and how natural the overall experience felt.
© 2026 Saim Alshafi