
A Feature Map Framework with Playwright for Improved UI Testing Tracking

A few weeks back, Ben Fellows published a LinkedIn post asking for solutions to a problem he often faces in his work.

As a QA service provider building automation, one challenge I face is allowing the customer to see & engage with the test plan. I've tried TestPad, TestRail, Sheets, & even just more project oriented approaches like Jira. At one point, I just used the Playwright report but got push back. Anyone have a solution they think is great for allowing non-technical participants to review the live test suite that is automated?

I responded with my solution: utilizing a feature map to track my automation progress. In this article I'll walk through implementing the feature-map npm package in the playwright-practicesoftwaretesting.com repo. The feature map was born as an Excel spreadsheet where I tracked areas of the system and scenarios that we wanted to cover with automation. From there, Sergei Gapanovich took the idea and built the code which powers the feature-map package.

Overview: feature-map package

The feature-map package is a bare-bones library that allows you to create a YAML file listing the different actions/features within a website, with a true or false value on each to indicate whether there is automation coverage for that feature.

The primary purpose of this tool is to map out all the different actions that can be taken within a web application through the UI. For each item on the map, we can indicate whether any automation exercises that functionality; if so, it is marked as true. This lets us track overall test coverage for the UI we are testing. I like to think of it as a measuring stick: we create a list of items, and I can use it to measure my team's (or my own) progress against our test coverage.

What's nice is that this can all be created and maintained through the YAML file, which is committed to the same repository as your test code. So as you add automation that covers new actions, you also update the feature map with the new coverage, and get feedback on what percentage of the features have any sort of automation around them.

Hol'up, did you say %, as in test coverage percentage? Well, sort of. I know some folks aren't fans of talking about percentages around test coverage or automation coverage. When you really think about it, it's very difficult to put an absolute percentage on such an unknown, wicked problem. So when I think and talk about a percentage, it means identifying the things we want to measure against (the actions throughout the website), adding them to the YAML, and measuring whether there is any coverage against those actions. This kind of "automation coverage" tracking I am OK with; it's much like the measuring stick we are building.

vintage measuring stick
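To make the measuring-stick idea concrete, here is a minimal sketch of the coverage arithmetic. This is not the package's actual implementation; it just illustrates the idea of walking a parsed feature tree, counting the boolean leaves, and reporting covered/total as a percentage.

```typescript
// A minimal sketch (not the package's actual implementation): walk a
// parsed feature tree, count the boolean leaves, and report the ratio
// of covered leaves as a percentage.
type FeatureTree = { [name: string]: boolean | FeatureTree };

function tally(tree: FeatureTree): { covered: number; total: number } {
  let covered = 0;
  let total = 0;
  for (const value of Object.values(tree)) {
    if (typeof value === "boolean") {
      total += 1;
      if (value) covered += 1;
    } else {
      const child = tally(value); // recurse into nested features
      covered += child.covered;
      total += child.total;
    }
  }
  return { covered, total };
}

function coveragePercent(tree: FeatureTree): number {
  const { covered, total } = tally(tree);
  // Round to two decimal places, e.g. 1/11 -> 9.09
  return total === 0 ? 0 : Math.round((covered / total) * 10000) / 100;
}

// Mirrors the /auth/login entry in featureMap.yml: 3 of 6 features covered.
const login: FeatureTree = {
  "sign in with google": false,
  email: true,
  password: true,
  login: true,
  "register your account": false,
  "forgot password": false,
};

console.log(`${coveragePercent(login)}% coverage`); // "50% coverage"
```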

Once we implement the feature-map package, our output will look something like this:

Example of feature map calculations

Implementing feature-map package

Now onto the fun stuff. Below you can find a link to the npm package, with a decent README on how to implement this in your own repository.

feature-map
A Tool to manually track the automation coverage progress of features within a project. Latest version: 1.0.0, last published: 21 days ago. Start using feature-map in your project by running `npm i feature-map`. There are no other projects in the npm registry using feature-map.

Our first step is to install the package into our project. To do this, run the following command from the root of the repository:

npm install feature-map

The next step is to create a YAML file in your repository that will serve as our feature map. I created a file named featureMap.yml and started building out the features found within the UI for my project.

- page: "/auth/login"
  features:
    sign in with google: false
    email: true
    password: true
    login: true
    register your account: false
    forgot password: false
- page: "/auth/forgot-password"
  features:
    email: false
    set new password: false
- page: "/auth/register"
  features:
    first name: false
    last name: false
    date of birth: false
    address: false
    postcode: false
    city: false
    state: false
    country: false
    phone rate: false
    e-mail address: false
    password: false
    register button: false
- page: "/category/hand-tools"
  features:
    header: true
    sidebar:
      sort: false
      filters: false
      by brand: false
    product card:
      image: false
      image zoom: false
      title: false
      price: false
    pagination:
      previous: false
      next: false
      number: false
- page: "/product/{id}"
  features:
    header: false
    product details:
      image: false
      title: false
      tags: false
      price: false
      description: false
      quantity: false
      add to cart: false
      add to favorites: false
    related products:
      image: false
      title: false
      more information: false
    footer: false

We don't yet have all the actions within the website in the feature map; continuing to build it out is a todo item, as we are building out our measuring stick!

Before we get too far, it's important to dive into what a YAML file is and what each section represents. We will be utilizing collections in our YAML file, and there are two forms a collection can take:

  • Sequences (lists/arrays)
  • Mappings (dictionaries/hashes)

First off, anytime you see a dash (-), it denotes a list item or element. So at the highest level, we are using these sequences (lists) to organize the pages in our file.

The mappings can be thought of as key/value pairs within the YAML file.

  • - page: "/auth/login": This line defines a new list item whose page key is set to the URL path "/auth/login".
  • features:: This line starts a mapping of features associated with the current page.
  • sign in with google: false: This line is a mapping that defines a feature/action named "sign in with google" and sets its value to false.

What's nice about this structure is that, within a page, you can map out each feature, drilling all the way down into multiple popups if needed in order to track coverage. It is important that each branch does end with a key/value pair (mapping):

# Valid
- page: "/category/hand-tools"
  features:
    header: true
    sidebar:
      sort: false
      filters: false
      by brand: false
    product_popup:
      additional details:
        high resolution image:
          download button: true
          
# Invalid
- page: "/category/hand-tools"
  features:
    header: true
    sidebar:
      sort: false
      filters: false
      by brand: false
    product_popup:
      additional details:
        high resolution image:
          download button:    # note: this key has no value
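To see why the invalid shape matters, here is an illustrative check you could run yourself; this is not the feature-map package's own code, just a sketch. Most YAML parsers (such as js-yaml) return null for a key with no value, so a walk that flags any non-boolean leaf catches the problem:

```typescript
// Illustration only (not the feature-map package's own code): flag any
// leaf in a parsed feature tree that is not a plain true/false.
type FeatureNode = boolean | null | { [key: string]: FeatureNode };

function findInvalidLeaves(node: FeatureNode, path: string[] = []): string[] {
  if (typeof node === "boolean") return []; // a proper leaf
  if (node === null || typeof node !== "object") return [path.join(" > ")];
  return Object.entries(node).flatMap(([key, child]) =>
    findInvalidLeaves(child, [...path, key])
  );
}

// The invalid example above: "download button" has no value, so a YAML
// parser would hand it to us as null.
const invalid = findInvalidLeaves({
  product_popup: {
    "additional details": {
      "high resolution image": { "download button": null },
    },
  },
});

console.log(invalid);
// ["product_popup > additional details > high resolution image > download button"]
```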

More details on the yaml file format can be found below!

What is YAML? The YML File Format
YAML is one of the most popular languages for writing configuration files. In this article, you will learn how YAML compares to XML and JSON - two languages also used for creating configuration files. You will also learn some of the rules and features of the language, along with its

Now that we have the package installed and a featureMap.yml file, we need to use the functions from the package in our Playwright test! It's worth noting here that if you are using a different testing library, you can still use this package; you'll just have to figure out how/where to calculate the coverage in your suite (this could even be outside of a test runner process). For our example, the feature calculation will be part of the test run.

I'll update the Playwright config to implement a new project named calculation. With this change I also made ui-tests depend on calculation. Check the pull request at the end of this section to see all the specific changes.

// playwright.config.ts

...
export default defineConfig<APIRequestOptions & TestOptions>({
  timeout: 30_000,
  projects: [
    { name: "setup", testMatch: /.*\.setup\.ts/, fullyParallel: true },
    {
      name: "calculation",
      testMatch: /.*\.calculation\.ts/,
    },

    {
      name: "ui-tests",
      dependencies: ["setup", "calculation"],
    },
  ],
  testDir: "./tests",
 ...
  },
});

I then created a new file that runs as part of the calculation project, just as any spec file would. In the code below, I read an environment variable named CALCULATE_COVERAGE, which I expect to be set to "true" in order for the calculateYamlCoverage() function to run.

// tests/featuremap.calculation.ts

import { test as calculation } from "@playwright/test";
import { calculateYamlCoverage } from "feature-map";

calculation("Feature Map", async () => {
  // Env vars are strings, so compare against "true"; otherwise any
  // non-empty value (even "false") would be truthy.
  const runCalculationCoverage = process.env.CALCULATE_COVERAGE === "true";
  if (runCalculationCoverage) {
    console.log("Calculating coverage");
    calculateYamlCoverage("./featureMap.yml");
  } else {
    console.log("Skipping coverage calculation");
  }
});

I also went ahead and set CALCULATE_COVERAGE to true in my .env file so the coverage will always be calculated.

// .env

...
# Calculation Coverage
CALCULATE_COVERAGE=true

With this change, the npm library not only logs the coverage output to the console but also generates a text file named coverage-output.txt, which can easily be viewed locally or saved as an artifact in CI for review. I've also updated my .gitignore file so this file isn't tracked.

// ./.gitignore

coverage-output.txt

Example of the coverage-output.txt file.

// ./coverage-output.txt

/auth/login page has 50% coverage
/auth/forgot-password page has 0% coverage
/auth/register page has 0% coverage
/category/hand-tools page has 9.09% coverage
/product/{id} page has 0% coverage

Total Product coverage is: 9.09%

The pull request where we implemented the feature-map package can be found below.

Bm/feature map by BMayhew · Pull Request #9 · playwrightsolutions/playwright-practicesoftwaretesting.com
Example using Playwright against the site https://practicesoftwaretesting.com

Future Improvements and Shortcomings

One area where this tool currently falls short is the ability to easily track test scenarios or cases as part of the feature map; currently everything is categorized as a feature. I know this is something that could be added with a bit of effort. If you want to take a stab at it, feel free to submit a pull request: https://github.com/playwrightsolutions/feature-map.

I could see a future where the tool itself dynamically scrapes the webpage and automatically builds out or adds to the featureMap.yml file. In my mind this would be more of a 'change detector' to help ensure that we have coverage over new areas of the system as we continue to add to and maintain our automated tests.

This leads to the shortcoming of keeping the featureMap.yml file up to date with the actual reality of the front end of the website under test. Without developers involved, or every ticket going through the quality team before being shipped, it requires a good bit of diligence to go in and update featureMap.yml so it matches reality. I have found that when kicking off a project, making "what we want to add automation around" a specific phase has been a good cadence for doing a full site check to ensure the feature map is up to date. Typically this is once a quarter or so for the recent projects I've been leading.

In the future I could see this being tracked based on specific names or descriptions in tests. This would eliminate the need to manually add true and false to the YAML file; instead, coverage would be calculated automatically based on the details within a test.

Another shortcoming is that there is only a true or false value, which makes it difficult to communicate the depth of the coverage against a certain action on a webpage. I'm OK not knowing the depth, but if you want to communicate this information, you'll have to come up with a different way to do it.

The final shortcoming again builds on the binary true/false values. For functionality we purposely don't plan to cover with UI automation (maybe it's covered at a lower level), it would be nice to record a different status in the YAML file, such as omitted, and have that status not count against the overall coverage percentage.
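As a sketch of what that could look like, here is a hypothetical calculation that skips "omitted" leaves. To be clear, neither the "omitted" status nor this function exists in the feature-map package today; it's purely an illustration of the idea.

```typescript
// Hypothetical extension (not part of the feature-map package today):
// allow an "omitted" status and exclude those leaves from the totals.
type Status = boolean | "omitted";
type Tree = { [name: string]: Status | Tree };

function coverage(tree: Tree): { covered: number; total: number } {
  let covered = 0;
  let total = 0;
  for (const value of Object.values(tree)) {
    if (value === "omitted") continue; // tracked on the map, but not counted
    if (typeof value === "boolean") {
      total += 1;
      if (value) covered += 1;
    } else {
      const child = coverage(value);
      covered += child.covered;
      total += child.total;
    }
  }
  return { covered, total };
}

// "legacy banner" is a made-up feature we deliberately don't automate in
// the UI, so it doesn't drag the percentage down: 1 of 2 counted -> 50%.
const page: Tree = { header: true, "legacy banner": "omitted", footer: false };
console.log(coverage(page)); // { covered: 1, total: 2 }
```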

Wrap Up

I find this tool can be really valuable for communicating the overall status of UI automation coverage. It's all part of the repository and can be updated along with UI automation specs as they are committed. During code reviews, I've found it useful to always check the UI coverage against featureMap.yml to ensure the file was updated with any changes as necessary.

If you found this article helpful, or you have a better way to track this data, please do connect with me on LinkedIn and message me about your ideas!


Thanks for reading! If you found this helpful, reach out and let me know on LinkedIn or consider buying me a cup of coffee. If you want more content delivered to you in your inbox subscribe below, and be sure to leave a ❤️ to show some love.