Jest Test Analytics Dashboard
You're tasked with building a system to analyze Jest test results and provide insightful analytics. This will help development teams understand test execution times, identify flaky tests, and monitor overall test suite health.
Problem Description
Your goal is to create a Jest reporter that collects and processes test results, then generates a summary report. This report should include metrics such as total test execution time, average test duration, and a list of tests that have run for an unusually long time or have failed inconsistently.
Key Requirements
- Custom Jest Reporter: Implement a custom Jest reporter that hooks into Jest's test lifecycle events (see the sketch after this list).
- Data Collection: Capture relevant data for each test, including:
  - Test name
  - Execution time
  - Status (passed, failed, skipped, todo)
  - Any error messages for failed tests
- Analytics Calculation:
  - Calculate the total execution time of all tests.
  - Calculate the average execution time per test.
  - Identify "slow" tests (e.g., tests taking longer than a configurable threshold).
  - Identify "flaky" tests (e.g., tests that have both failed and passed across a set of runs); for this challenge, we'll simplify this by identifying tests that failed at least once.
- Report Generation: Output a human-readable summary report in a structured format (e.g., JSON or a formatted string).
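As a starting point, the first two requirements can be combined into a reporter skeleton like the one below. This is a minimal sketch assuming Jest's standard custom-reporter API (a class whose onTestResult and onRunComplete methods are called by Jest); the TestRecord and AnalyticsReporter names are illustrative, not part of Jest.

```typescript
// my-reporter.ts -- a minimal sketch of the data-collection half of the reporter.
import type { AssertionResult, TestResult } from '@jest/test-result';
import type { Config } from '@jest/types';

// Illustrative shape for what we keep per test; not a Jest type.
interface TestRecord {
  name: string;
  duration: number;          // milliseconds
  status: string;            // e.g. 'passed', 'failed', 'skipped', 'todo'
  failureMessages: string[]; // populated for failed tests
}

export default class AnalyticsReporter {
  private records: TestRecord[] = [];

  constructor(
    private globalConfig: Config.GlobalConfig,
    private options: { slowTestThreshold?: number } = {},
  ) {}

  // Called by Jest once per test file; each individual test is an AssertionResult.
  onTestResult(_test: unknown, testResult: TestResult): void {
    testResult.testResults.forEach((result: AssertionResult) => {
      this.records.push({
        name: result.fullName,
        duration: result.duration ?? 0,
        status: result.status,
        failureMessages: result.failureMessages,
      });
    });
  }

  // Called by Jest after the whole run; analytics and report output go here.
  onRunComplete(): void {
    // See the later sketches for the calculations and report emission.
  }
}
```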
Expected Behavior
When Jest runs with your custom reporter, it should execute all tests as usual. After all tests are completed, your reporter should output the generated analytics report to the console or a specified file.
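For reference, registering the reporter might look like the following jest.config.ts. This is a sketch: the reporters array is Jest's real config option, but the slowTestThreshold option name is an assumption made for this challenge, and loading a .ts reporter directly usually requires a TypeScript runtime (e.g., ts-node) or pointing Jest at the compiled .js file instead.

```typescript
// jest.config.ts -- register the custom reporter alongside Jest's default output.
import type { Config } from 'jest';

const config: Config = {
  reporters: [
    'default',                                        // keep Jest's normal console output
    ['./my-reporter.ts', { slowTestThreshold: 100 }], // analytics reporter + its options
  ],
};

export default config;
```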
Edge Cases
- No tests found: The reporter should gracefully handle scenarios where no tests are executed.
- All tests pass/fail: The analytics should still be meaningful.
- Long test names: Ensure the report can handle and display long test names correctly.
Examples
Example 1: Basic Test Run
Consider a simple Jest configuration where reporters: ["./my-reporter.ts"] is set.
Input (Jest Execution Context): Assume Jest runs a suite with the following tests:
- test('adds 1 + 2 to equal 3', () => expect(1 + 2).toBe(3)) - duration: 50ms, status: passed
- test('subtracts 5 - 3 to equal 2', () => expect(5 - 3).toBe(2)) - duration: 30ms, status: passed
- test('should eventually resolve', async () => await new Promise(resolve => setTimeout(resolve, 100)), 1000) - duration: 120ms, status: passed
Output (Analytics Report - JSON format):
{
"totalTests": 3,
"passedTests": 3,
"failedTests": 0,
"skippedTests": 0,
"todoTests": 0,
"totalExecutionTime": 200,
"averageExecutionTime": 66.67,
"slowTests": [
{
"name": "should eventually resolve",
"duration": 120
}
],
"flakyTests": []
}
Explanation:
- totalTests: 3 (sum of all tests run)
- passedTests, failedTests, etc.: counts based on each test's status.
- totalExecutionTime: 50ms + 30ms + 120ms = 200ms
- averageExecutionTime: 200ms / 3 tests = 66.67ms (rounded)
- slowTests: the test "should eventually resolve" took 120ms, which is above a hypothetical threshold (e.g., 100ms).
- flakyTests: no tests failed in this run, so this array is empty.
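A small sketch of that arithmetic, using the illustrative TestRecord shape assumed earlier (the function name and default threshold are assumptions for this challenge):

```typescript
interface TestRecord { name: string; duration: number; status: string }

// Sketch of the Example 1 timing metrics: total, rounded average, and slow tests.
function computeTimingMetrics(records: TestRecord[], slowTestThreshold = 100) {
  const totalExecutionTime = records.reduce((sum, r) => sum + r.duration, 0); // 50 + 30 + 120 = 200
  const averageExecutionTime =
    Math.round((totalExecutionTime / records.length) * 100) / 100;            // 200 / 3 -> 66.67
  const slowTests = records
    .filter(r => r.duration > slowTestThreshold)                              // only the 120ms test
    .map(r => ({ name: r.name, duration: r.duration }));
  return { totalExecutionTime, averageExecutionTime, slowTests };
}
```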
Example 2: Run with Failures
Input (Jest Execution Context):
- test('adds 1 + 2 to equal 3', () => expect(1 + 2).toBe(3)) - duration: 40ms, status: passed
- test('multiplies 2 * 3 to equal 6', () => expect(2 * 3).toBe(7)) - duration: 35ms, status: failed
- test('division by zero', () => expect(10 / 0).toBe(Infinity)) - duration: 25ms, status: passed
Output (Analytics Report - JSON format):
{
"totalTests": 3,
"passedTests": 2,
"failedTests": 1,
"skippedTests": 0,
"todoTests": 0,
"totalExecutionTime": 100,
"averageExecutionTime": 33.33,
"slowTests": [],
"flakyTests": [
{
"name": "multiplies 2 * 3 to equal 6",
"duration": 35
}
]
}
Explanation:
- failedTests: 1, for the "multiplies 2 * 3 to equal 6" test.
- totalExecutionTime: 40ms + 35ms + 25ms = 100ms.
- averageExecutionTime: 100ms / 3 = 33.33ms.
- slowTests: no tests exceeded the hypothetical 100ms threshold.
- flakyTests: the "multiplies 2 * 3 to equal 6" test failed. In a real-world scenario you would track this over multiple runs; for this challenge, a single failure marks it as potentially flaky.
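The simplified flakiness rule can be sketched as a filter over the same collected records (again using the illustrative TestRecord shape):

```typescript
interface TestRecord { name: string; duration: number; status: string }

// Simplified rule for this challenge: a test that failed at least once in the
// current run is flagged as flaky. A real system would compare results across runs.
function findFlakyTests(records: TestRecord[]) {
  return records
    .filter(r => r.status === 'failed')
    .map(r => ({ name: r.name, duration: r.duration }));
}
```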
Example 3: Edge Case - No Tests
Input (Jest Execution Context): Jest runs, but no tests are discovered or executed.
Output (Analytics Report - JSON format):
{
"totalTests": 0,
"passedTests": 0,
"failedTests": 0,
"skippedTests": 0,
"todoTests": 0,
"totalExecutionTime": 0,
"averageExecutionTime": 0,
"slowTests": [],
"flakyTests": []
}
Explanation: An empty report is generated when no tests are run. Note that averageExecutionTime must be reported as 0 rather than the result of a division by zero (see the sketch below).
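The only calculation that needs special care in this edge case is the average. A minimal guard might look like this (a sketch; the function name is illustrative):

```typescript
// Return 0 for an empty run instead of NaN from dividing by zero.
function safeAverage(totalExecutionTime: number, totalTests: number): number {
  return totalTests === 0
    ? 0
    : Math.round((totalExecutionTime / totalTests) * 100) / 100;
}
```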
Constraints
- The reporter must be implemented in TypeScript.
- The reporter should integrate with Jest's reporter API.
- The analytics report should include all fields specified in the examples (a sketch of this shape follows the list).
- The slow test threshold should be configurable (e.g., via a Jest config option or a default value). For simplicity, use a default of 100ms.
- The definition of "flaky" for this challenge is simplified: a test is considered "flaky" if it failed at least once during the current test run.
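One way to pin down the required report fields is a TypeScript interface derived directly from the example outputs (a sketch; the AnalyticsReport name is illustrative):

```typescript
// Report shape implied by the example outputs.
interface AnalyticsReport {
  totalTests: number;
  passedTests: number;
  failedTests: number;
  skippedTests: number;
  todoTests: number;
  totalExecutionTime: number;   // milliseconds
  averageExecutionTime: number; // milliseconds, rounded to two decimals
  slowTests: Array<{ name: string; duration: number }>;
  flakyTests: Array<{ name: string; duration: number }>;
}
```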
Notes
- You'll need to consult the Jest documentation for implementing custom reporters and understanding the reporter API (e.g., onRunComplete, onTestResult).
- Consider how you will structure your reporter class and manage the state of test results.
- For the "flakyTests" definition, focus on identifying tests that failed in this specific run. A more robust solution would involve storing historical results, but that's beyond the scope of this challenge.
- You can use console.log to output your final report. If you want to output to a file, Jest's onRunComplete hook can access configuration options where you might specify an output file path, as sketched below.
- Feel free to choose a JSON output format for your report, as it's structured and easy to parse.
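A sketch of the report emission in onRunComplete, assuming an optional outputFile reporter option (a name invented for this challenge, not a Jest built-in) and a buildReport helper that assembles the fields listed above:

```typescript
import { writeFileSync } from 'fs';

export default class AnalyticsReporter {
  constructor(
    _globalConfig: unknown,
    private options: { outputFile?: string } = {},
  ) {}

  // Assumed helper that assembles the AnalyticsReport fields from collected records.
  private buildReport(): object {
    return { /* totalTests, passedTests, ... */ };
  }

  // Called by Jest after the whole run: print the report and optionally save it.
  onRunComplete(): void {
    const json = JSON.stringify(this.buildReport(), null, 2);
    console.log(json);
    if (this.options.outputFile) {
      writeFileSync(this.options.outputFile, json); // e.g. "test-analytics.json"
    }
  }
}
```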