Har Suyash Bahadur

Improved debugging flow for API tests

Postman · Product designer · Lead on the initiative

Key contributions:

  • Investigated user needs and communicated findings to the rest of the team.
  • Used affinity mapping to find patterns in user requests, and mapped the user journey to align the team.
  • Delivered the final UI with visual grouping and interactions based on the user flows.

Background

What is Postman?

Postman is a tool used by software developers to develop, test and deploy APIs.

What is API and API testing?

An API (application programming interface) is what lets one piece of software communicate with another. Say our data sits inside a box with slits on one side, through which we can put things in, take things out, or peek at what exists inside. The slits become an interface for interacting with the box.

An illustration showing a box with openings on one side where things are inserted and ejected from the other, visualising the interface described above.

API testing checks how the box behaves when we interact with it: if the output matches the expectation, the test passes.

For the system to function reliably, certain inputs should give certain expected outputs. Validating this output, the response from the server after an API call, against a set of expectations is called API (functional) testing.
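In Postman, such expectations are written as small JavaScript test scripts. Stripped of Postman's helpers, the core idea can be sketched as a plain function (a hypothetical illustration, not Postman's actual implementation):

```javascript
// Hypothetical sketch of API functional testing: validate the server's
// response against a set of expectations; the test passes only if every
// expectation holds.
function runTest(response, expectation) {
  const failures = [];
  if (response.status !== expectation.status) {
    failures.push(`expected status ${expectation.status}, got ${response.status}`);
  }
  // Check that each expected body field matches the actual response body.
  for (const [key, value] of Object.entries(expectation.body || {})) {
    if (response.body[key] !== value) {
      failures.push(`expected body.${key} to be ${value}, got ${response.body[key]}`);
    }
  }
  return { passed: failures.length === 0, failures };
}

// A 200 response containing the expected field passes;
// a 500 response against the same expectation would fail.
const result = runTest(
  { status: 200, body: { id: 1, name: "box" } },
  { status: 200, body: { name: "box" } }
);
```

When a test fails, the `failures` list is exactly the kind of "what expectation broke" detail the rest of this case study is about surfacing.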

The problem

What were people asking for?

At the start of the project we were looking at isolated user requests asking us to add a link to the request or tests somewhere, export console logs, and so on.

I created affinity maps to present these problems to the team in a way that made a case for addressing the overall testing/debugging experience instead of the isolated Jira tickets.

Screenshot of the affinity map in FigJam

Representational artefact: grouping all the smaller user requests and UX fail tickets to point out patterns in the overall flow

Secondly, the simplest and most effective tool a team can use while addressing any UX problem is a detailed user journey map. Different people carry different impressions of the journey in their heads (or none at all) until it is laid out in front of everyone to define expectations and derive a clearer picture.

Screenshot of the user journey map in FigJam

Representational artefact: working with the team after user research to lay out the whole user journey.

Getting to the root of the problem

A flowchart of the typical user flow while debugging, and where the gap in our product is: helping users peek into past executions and isolate problems, so that they know where to look for the root cause.

A simplified diagram of what a typical debugging journey may look like.

While our product already allowed users to run tests and view the results of what passed and what failed, it did not provide a fitting experience to start the debugging flow. What we realised would help users here was knowing:

  1. What exactly was sent with the API request?
  2. What was the response received from the server?
  3. What was sent or received before and after the point of failure?
  4. What were the test assertions? (the exact expectations that failed)
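Concretely, each request execution would need to carry a snapshot covering these four points. A hypothetical shape for such a snapshot (the names are illustrative, not Postman's actual data model):

```javascript
// Hypothetical shape of one execution snapshot in a test run,
// covering the four questions above.
const executionSnapshot = {
  // 1. What exactly was sent with the API request
  request: {
    method: "GET",
    url: "/users/1",
    headers: { Accept: "application/json" },
    body: null,
  },
  // 2. What was the response received from the server
  response: {
    status: 500,
    headers: { "Content-Type": "application/json" },
    body: { error: "internal" },
  },
  // 3. Where this execution sits in the run, so the executions
  //    before and after the failure can be inspected
  position: { indexInRun: 3, totalInRun: 10 },
  // 4. The exact assertions, and which of them failed
  assertions: [
    { name: "Status code is 200", passed: false, error: "expected 200, got 500" },
  ],
};
```

Storing this per execution is what makes the back-end cost discussed in the next section non-trivial.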

Current state and challenges

Too much information

While we were able to look deeper into the users' journey and get clarity on what was needed, we faced some challenges:

  1. It was a challenge for the back-end to store all this information at run time, and doing so could slow down the operation.
  2. Requests sent and responses received carry many different types of data, such as status codes, headers, and bodies. Representing them in an easy-to-consume way can be a challenge.
  3. We had an existing interface that a lot of users were used to, so we had to balance re-imagining certain flows with maintaining familiarity, to avoid breaking what was already solved.

Screenshot of the old UI having too much information, no clear visual hierarchy and poor accessibility

Old interface lacking clear information hierarchy: lines getting cut off and difficulty in differentiating relevant information

Screenshot of the old UI with difficult to discover information and unintuitive interactions

Old interface with non-intuitive interactions and difficulty in discovering and consuming information

Explorations and solution

Approach: Two-panel selector

After trying out a number of layouts and interactions that would make way for a logical debugging flow, we decided on the two-panel selector approach.

  1. The default view is visually not very different from the old UI, making the transition easier.
  2. The job of going through the test results and deciding what to focus on can be done alongside the second job of identifying areas of failure, without losing context or disrupting the flow of information.

Animated GIF showing the two-pane selection interaction

Selecting a request card with test results and viewing the snapshot of its execution in a split-pane view. The left-side list helps in deciding what to focus on or debug, and the right side helps to dig deeper and identify potential causes, or where to look for them (request/response or assertions).

Exploring layouts for left hand side pane

Possible flows of information in the LHS pane

I tried different layouts to establish the intended flow of information. Although we did not want to change the existing UI drastically in one go, I thought of experimenting with progressive disclosure to optimise for cognitive load instead of clicks.

Showing various layout and UI approaches

Experimenting with layouts to facilitate the intended information flow

Typography and proximity for better visual grouping

Having experimented with these ideas, we wanted to reach the desired flow through smaller, validated changes. Hence, to improve the UI I relied on fundamental principles: typography and proximity for better visual grouping.

A side-by-side comparison of the old and new UI showing better visual grouping and scannability in the new UI

Small changes to make the UI a lot more scannable.

Exploring layouts for right hand side pane

Showing various layout and UI approaches

Experimenting with different ways of organising the information in the RHS pane.

While there were different types of information to be shown, we had to aggressively prioritise for the use case instead of presenting everything and letting users struggle with it.

The last approach seemed interesting for debugging errors in request execution, but debugging the test assertions was a different scenario: there, the response body mattered the most and needed enough space to be consumed properly.

"Confused?!" Going back to the user journey

We went back to users' debugging journeys in different real-life scenarios and came across the network tab in the browser's inspect tool. Developers often use it while debugging on the web, and we decided to base our designs on a similar experience.

Final approach with left and right side panes

With this, we tried to keep the UI as simple as possible and minimise both the informational and navigational load.

  1. The response body is the most important information, so we give it the complete pane, with the ability to format, search through, and copy it.
  2. Headers include other meta-information used to communicate with the server. Clubbing request and response headers together helps users compare and understand this communication.

Conclusion

Measuring impact

The debugging journey is highly iterative: users may have to run the tests again and again after making changes. To measure the impact of our improvements, we tracked the number of intermediate runs it takes to debug a test, as well as the final number of runs with successful tests.

Our findings almost 3 months after the release revealed that only slightly more tests were debugged in fewer than 5 intermediate runs (~5% change), and fewer took more than 25 runs. The number of successful runs also increased by only 1%.
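The buckets above (fewer than 5 runs, more than 25 runs) can be computed from per-session run counts. A minimal sketch, assuming we log one run count per debugging session:

```javascript
// Hypothetical sketch: bucket debugging sessions by the number of
// intermediate runs needed before the tests passed.
function bucketSessions(runsPerSession) {
  return {
    under5: runsPerSession.filter((n) => n < 5).length,
    from5to25: runsPerSession.filter((n) => n >= 5 && n <= 25).length,
    over25: runsPerSession.filter((n) => n > 25).length,
  };
}

// e.g. four sessions taking 1, 4, 10 and 30 runs respectively
// → { under5: 2, from5to25: 1, over25: 1 }
```

Comparing these bucket shares before and after a release is what yields the ~5% shift reported above.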

Thank You!

  • Accessible error signifier

A small-change, big-impact story of creating subtle yet effective error signifiers and reducing errors.

    UI/UX design

  • Engagement in learning

    An exploratory research to improve students' emotional and cognitive engagement in class participation.

    Design research

  • Assistive device to pour

    A device that helps visually impaired users safely pour hot liquids by communicating with its form.

    Industrial design