# Test Management Dashboard Guide

The **Test Management Dashboard** provides complete visibility into **test case creation, ownership, coverage, execution readiness, and gaps** across projects, sprints, modules, and platforms.

This dashboard helps teams:

* Track test case creation trends
* Measure test coverage at module, platform, and user story levels
* Identify unexecuted or unused test cases
* Analyze tester contribution and execution readiness
* Improve overall test planning and test asset utilization

It is primarily used by **QA Leads, Test Managers, Product Owners, and Delivery Managers** during test planning, sprint reviews, and quality audits.

### How to Use Dashboard Graphs (Common Instructions for All Widgets)

Every widget on the dashboard includes a standard **three-dot (⋮) menu** in the top-right corner. These options behave the same across all graphs.

* **Force Refresh:** Reloads the widget with the latest data. Always use this before reviews or release decisions.
* **Enter Fullscreen:** Expands the graph for better visibility during meetings or presentations.
* **Edit Chart:** Allows authorized users to modify filters, grouping, or metrics.
* **Cross-Filtering Scoping:** Selecting data in one widget automatically filters related widgets for deeper analysis.
* **View Query:** Displays the underlying logic or query used to generate the graph (mainly for admins/audits).
* **View as Table:** Converts the graph into a table to view exact counts and records.
* **Drill to Detail:** Opens the individual records contributing to the graph.
* **Share:** Generates shareable links or embedded views.
* **Download:** Exports the graph or data as Image, PDF, or CSV/Excel.

***

### Count of Test Case – Module Wise

#### Description

Displays the **number of test cases created per module**.

<figure><img src="/files/XOofK3S8xXNMauuVAqAt" alt="" width="375"><figcaption></figcaption></figure>

#### Axis Details

* **X-Axis:** Module Name
* **Y-Axis:** Count of Test Cases

#### Interpretation

* Modules with higher test case counts usually represent **complex or high-risk areas**.
* Modules with very few test cases may indicate **insufficient coverage**.

#### Business & Operational Value

* Helps ensure balanced test coverage
* Identifies modules requiring additional test design
* Supports risk-based testing strategies
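The module-wise count and the low-coverage check above can be reproduced offline from a flat export of test cases (for example, the widget's CSV download). This is a hypothetical sketch: the `module` field name and the coverage threshold are assumptions, not the actual SimplifyQA export schema.

```python
from collections import Counter

def count_by_module(test_cases):
    """Count test cases per module, mirroring the widget's X/Y axes."""
    return Counter(tc["module"] for tc in test_cases)

def low_coverage_modules(test_cases, threshold=5):
    """Flag modules whose test count falls below a chosen threshold."""
    counts = count_by_module(test_cases)
    return [m for m, n in counts.items() if n < threshold]

# Example export rows (field names are illustrative only)
cases = [
    {"id": "TC-1", "module": "Login"},
    {"id": "TC-2", "module": "Login"},
    {"id": "TC-3", "module": "Payments"},
]
print(count_by_module(cases))       # Counter({'Login': 2, 'Payments': 1})
print(low_coverage_modules(cases))  # both modules fall below the threshold of 5
```

The threshold is a team decision; risk-based strategies typically set it higher for complex or high-risk modules.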

***

### List of Test Executed – User Wise

#### Description

Shows **test execution contribution by each user**, categorized by execution type.

<figure><img src="/files/QB0N8KgSA1EWex8ZnPnx" alt="" width="375"><figcaption></figcaption></figure>

#### Axis Details

* **X-Axis:** User Name
* **Y-Axis:** Count of Test Executions

#### Execution Type Classification

* Manual
* Automation
* DDU (if applicable)

#### Interpretation

* Highlights individual tester contribution
* Shows balance between manual and automated testing

#### Business & Operational Value

* Supports workload analysis
* Helps identify automation adoption
* Assists in performance reviews

***

### User-wise Count of Test Case Executions – Status Wise

#### Description

Displays **execution outcomes per user**, grouped by execution status.

<figure><img src="/files/JlZuBB6HNGFh8F0cBC9s" alt="" width="375"><figcaption></figcaption></figure>

#### Axis Details

* **X-Axis:** User Name
* **Y-Axis:** Count of Test Case Executions

#### Status Classification

* Passed
* Failed

#### Interpretation

* High pass count indicates stable execution areas
* High fail count may point to unstable builds or complex scenarios

#### Business & Operational Value

* Identifies execution risk areas
* Supports capacity and ownership analysis
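As a minimal sketch of the status-wise aggregation this widget performs, the snippet below tallies Passed/Failed executions per user and derives a pass rate from an exported list of execution records. The `user` and `status` field names are assumptions for illustration, not the product's actual schema.

```python
from collections import defaultdict

def pass_rate_by_user(executions):
    """Aggregate Passed/Failed counts per user and derive a pass rate."""
    tally = defaultdict(lambda: {"Passed": 0, "Failed": 0})
    for ex in executions:
        tally[ex["user"]][ex["status"]] += 1
    return {
        user: {
            **counts,
            "pass_rate": counts["Passed"] / (counts["Passed"] + counts["Failed"]),
        }
        for user, counts in tally.items()
    }

# Example execution records (illustrative field names)
runs = [
    {"user": "asha", "status": "Passed"},
    {"user": "asha", "status": "Failed"},
    {"user": "ravi", "status": "Passed"},
]
print(pass_rate_by_user(runs))
```

A consistently low pass rate for one user often points to an unstable area of the build they own rather than to the tester themselves.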

***

### Count of Test Case Creator – User Wise

#### Description

Shows **test case creation contribution by users**, segmented by platform.

<figure><img src="/files/EYUI6Nbj64esvWyHm0AT" alt="" width="375"><figcaption></figcaption></figure>

#### Axis Details

* **X-Axis:** User Name
* **Y-Axis:** Count of Test Cases Created

#### Platform Classification

* Web
* Android
* iOS

#### Interpretation

* Highlights users contributing heavily to test design
* Reveals platform-specific expertise

#### Business & Operational Value

* Encourages balanced test creation
* Helps identify skill specialization

***

### List of Test Created in Month / Quarter – User Wise

#### Description

Provides a **tabular view of test cases created by users** within a specific time period.

<figure><img src="/files/gtoWqDJ1Ps0YPaBhJQhI" alt="" width="375"><figcaption></figcaption></figure>

#### Data Fields Explained

* Project ID
* Project Name
* Test Case ID
* Test Case Name

#### Interpretation

Helps track test case creation velocity over time.

#### Business & Operational Value

* Supports audit and compliance needs
* Measures test design throughput

***

### Count of Test Case – User Story Wise

#### Description

Displays **distribution of test cases mapped to user stories**.

<figure><img src="/files/ZLQpVvUZ9fOVlzDC1j3q" alt="" width="375"><figcaption></figcaption></figure>

#### Axis Details

* **X-Axis:** User Story ID
* **Y-Axis:** Count of Test Cases

#### Interpretation

* User stories with higher test counts are generally more complex
* Stories with very few tests may be under-tested

#### Business & Operational Value

* Validates requirement-to-test coverage
* Improves quality assurance at story level
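One caveat with a bar chart of test counts per story is that a story with zero mapped tests produces no bar at all. The sketch below, assuming hypothetical `story_id` fields, cross-checks the full story list against the mapped test cases so such gaps surface explicitly.

```python
def coverage_gaps(stories, test_cases, min_tests=1):
    """Return user stories with fewer than `min_tests` mapped test cases,
    including stories with no tests at all (which a bar chart can hide)."""
    counts = {story: 0 for story in stories}
    for tc in test_cases:
        if tc["story_id"] in counts:
            counts[tc["story_id"]] += 1
    return {story: n for story, n in counts.items() if n < min_tests}

# Example data (illustrative IDs and field names)
stories = ["US-101", "US-102", "US-103"]
cases = [{"id": "TC-1", "story_id": "US-101"}]
print(coverage_gaps(stories, cases))  # {'US-102': 0, 'US-103': 0}
```

Raising `min_tests` tightens the requirement-to-test coverage bar for complex stories.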

***

### Count of Test Case – Platform Wise

#### Description

Shows **test case distribution across platforms**.

<figure><img src="/files/9ERGSUITRFkdgkSPJRjp" alt="" width="375"><figcaption></figcaption></figure>

#### Axis Details

* **X-Axis:** Platform (Web, Android, iOS, API, Database, Mobile, Functional)
* **Y-Axis:** Count of Test Cases

#### Interpretation

* Ensures no platform is under-covered
* Highlights testing focus areas

#### Business & Operational Value

* Supports cross-platform testing strategy
* Helps align testing with product usage patterns

***

### Defect with Status for a Sprint – User Story Wise

#### Description

Displays **defects linked to user stories**, grouped by defect status for a sprint.

<figure><img src="/files/t8NlxRU9XQ1Lm6FcDjUx" alt="" width="375"><figcaption></figcaption></figure>

#### Axis Details

* **X-Axis:** User Story ID
* **Y-Axis:** Count of Defects

#### Interpretation

* High defect counts against a story indicate requirement or design gaps
* Closed defects indicate stabilization

#### Business & Operational Value

* Evaluates story quality
* Supports root cause analysis

***

### List of Test Created – User Wise (Up)

#### Description

Shows **test cases created by users**, categorized by creation type.

<figure><img src="/files/Qp1MtyyL5QuoZfISnwLY" alt="" width="375"><figcaption></figcaption></figure>

#### Axis Details

* **X-Axis:** User Name
* **Y-Axis:** Count of Test Cases Created

#### Creation Type

* Manual
* Automation
* DDU

#### Interpretation

Highlights individual contribution to test assets.

#### Business & Operational Value

* Encourages ownership in test design
* Supports automation planning

***

### Test Case Execution Status for a Sprint – User Story Wise

#### Description

Displays **execution status of test cases mapped to user stories** for a sprint.

<figure><img src="/files/xbaIYst8C0kPBzPnynai" alt="" width="375"><figcaption></figcaption></figure>

#### Axis Details

* **X-Axis:** User Story ID
* **Y-Axis:** Number of Test Cases

#### Interpretation

* Shows execution completeness at story level
* Identifies untested or partially tested stories

#### Business & Operational Value

* Improves sprint exit quality
* Ensures no story moves forward untested

***

### Count of Test Case That Are Never Executed for a Project

#### Description

Displays **test cases that have never been executed**, grouped by project.

<figure><img src="/files/oRqKKJoOCIE5PkC1YAg9" alt="" width="375"><figcaption></figcaption></figure>

#### Axis Details

* **X-Axis:** Project Name
* **Y-Axis:** Count of Never Executed Test Cases

#### Interpretation

* High values indicate unused or obsolete test cases
* Suggests need for test suite cleanup

#### Business & Operational Value

* Optimizes test repository
* Reduces maintenance overhead
* Improves execution efficiency
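Identifying never-executed test cases is essentially a set difference between the full test repository and the execution history. A minimal sketch, assuming illustrative `test_case_id` fields rather than the actual export schema:

```python
def never_executed(all_test_ids, execution_records):
    """Test cases with no execution record at all -- candidates for
    review, cleanup, or scheduling into an upcoming cycle."""
    executed = {rec["test_case_id"] for rec in execution_records}
    return sorted(set(all_test_ids) - executed)

# Example data (illustrative IDs and field names)
all_ids = ["TC-1", "TC-2", "TC-3"]
runs = [{"test_case_id": "TC-1", "status": "Passed"}]
print(never_executed(all_ids, runs))  # ['TC-2', 'TC-3']
```

Before deleting a never-executed case, confirm it is genuinely obsolete rather than simply not yet scheduled.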

***

### Key Takeaways & Best Practices

* Ensure **every user story has adequate test coverage**
* Regularly review **never-executed test cases** to remove redundancy
* Balance test creation across **modules and platforms**
* Encourage automation for stable and repetitive scenarios
* Use test management insights alongside **Execution, Defect, and Release dashboards** for full quality visibility


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.simplifyqa.ai/dashboard/test-management-dashboard-guide.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
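Since the question travels in a URL query parameter, it must be URL-encoded. The sketch below builds the request URL with Python's standard library; the actual HTTP call can then be made with `urllib.request` or any HTTP client.

```python
from urllib.parse import urlencode

BASE = "https://docs.simplifyqa.ai/dashboard/test-management-dashboard-guide.md"

def build_ask_url(question: str) -> str:
    """URL-encode the question into the `ask` query parameter."""
    return f"{BASE}?{urlencode({'ask': question})}"

url = build_ask_url("Which widgets support drill to detail?")
print(url)
# https://docs.simplifyqa.ai/dashboard/test-management-dashboard-guide.md?ask=Which+widgets+support+drill+to+detail%3F
```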
