VibeShift
AI-powered agent that streamlines web testing workflows by allowing developers to record, execute, and discover tests using natural language prompts in their AI coding assistants.
This project provides an AI-powered agent designed to streamline web testing workflows, particularly for developers using AI coding assistants such as GitHub Copilot, Cursor, and Roo Code. It integrates directly into these assistants via MCP (Model Context Protocol), allowing you to automate test recording, execution, and discovery using natural language prompts.
The Problem: Manually testing web applications after generating code with AI assistants is time-consuming and error-prone. Furthermore, AI-driven code changes can inadvertently introduce regressions in previously working features.
The Solution: This tool bridges the gap by enabling your AI coding assistant to:

- Record tests from natural language descriptions of user flows.
- Execute recorded tests to catch regressions.
- Discover potential test steps by crawling your application.
This creates a tighter feedback loop, automating the testing process and allowing the AI assistant (and the developer) to quickly identify and fix issues or regressions.
Demo: "Check if the text is overflowing in the div"
```
+-------------+     +-----------------+     +-----------------+     +------------------+     +--------------+
|    User     | --> | AI Coding Agent | --> |   MCP Server    | --> |  Web Test Agent  | --> |   Browser    |
| (Developer) |     | (e.g., Copilot) |     | (mcp_server.py) |     | (agent/executor) |     | (Playwright) |
+-------------+     +-----------------+     +-----------------+     +------------------+     +--------------+
       ^                                            |                        |                      |
       +--------------------------------------------+------------------------+----------------------+
                                     [Test Results / Feedback]
```
Core Components:

- MCP Server (`mcp_server.py`): exposes the tools `record_test_flow`, `run_regression_test`, `discover_test_flows`, and `list_recorded_tests`.
- WebAgent: in automated mode, interacts with the LLM to plan steps, controls the browser via `BrowserController` (Playwright), processes HTML/vision input, and saves the resulting test steps to a JSON file in the `output/` directory.
- TestExecutor: loads the specified JSON test file, uses `BrowserController` to interact with the browser according to the recorded steps, and captures results, screenshots, and console logs.
- CrawlerAgent: uses `BrowserController` and `LLMClient` to crawl pages and suggest test steps.

Prerequisites:

- MCP Python SDK (`pip install mcp[cli]`)
- Playwright browsers (`playwright install`)
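The recorded-test format itself is defined in `test_schema.md`; purely as an illustration of the idea, a saved test might look roughly like the dictionary below (all field names here are assumptions, not the actual schema):

```python
import json

# Illustrative sketch only: the real format is defined in test_schema.md.
# Field names ("steps", "action", "selector", ...) are assumptions.
example_test = {
    "test_name": "practice_test_login",
    "start_url": "https://practicetestautomation.com/practice-test-login/",
    "steps": [
        {"action": "type", "selector": "#username", "text": "student"},
        {"action": "type", "selector": "#password", "text": "Password123"},
        {"action": "click", "selector": "#submit"},
        {"action": "assert_text_visible", "text": "Congratulations student"},
    ],
}

# Round-trip through JSON, as the agent does when saving to output/.
serialized = json.dumps(example_test, indent=2)
loaded = json.loads(serialized)
print(loaded["steps"][0]["action"])  # -> type
```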
```bash
git clone <repository-url>
cd <repository-name>
python -m venv venv
source venv/bin/activate   # Linux/macOS
# venv\Scripts\activate    # Windows
pip install -r requirements.txt
playwright install --with-deps   # Installs browsers and OS dependencies
```
Create a `.env` file in the project root:

```
LLM_API_KEY="YOUR_LLM_API_KEY"
```

Replace `YOUR_LLM_API_KEY` with your actual key. Then add this to your MCP config:
```json
{
  "mcpServers": {
    "Web-QA": {
      "command": "uv",
      "args": ["--directory", "path/to/cloned_repo", "run", "mcp_server.py"]
    }
  }
}
```
Keep this server running while you interact with your AI coding assistant.
Interact with the agent through your MCP-enabled AI coding assistant using natural language.
Examples:
Record a Test: > "Record a test: go to https://practicetestautomation.com/practice-test-login/, type student into the username field, type Password123 into the password field, click the submit button, and verify the text Congratulations student is visible."
(The agent performs these actions and saves a `test_....json` file in the `output/` directory.)

Execute a Test: > "Run the regression test output/test_practice_test_login_20231105_103000.json"
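Under the hood, `TestExecutor` replays the recorded steps through `BrowserController` (Playwright). A minimal sketch of what such a replay loop might look like — the action vocabulary and its mapping to Playwright `Page` methods are assumptions, and a fake page object stands in for a real browser so the sketch runs standalone:

```python
def replay_step(page, step: dict) -> None:
    """Dispatch one recorded step to a (Playwright-like) page object.

    The action names here are assumptions, not the project's actual schema.
    """
    action = step["action"]
    if action == "goto":
        page.goto(step["url"])
    elif action == "type":
        page.fill(step["selector"], step["text"])
    elif action == "click":
        page.click(step["selector"])
    elif action == "assert_text_visible":
        assert step["text"] in page.content(), f"Missing text: {step['text']}"
    else:
        raise ValueError(f"Unknown action: {action}")


class FakePage:
    """Stand-in for a Playwright Page so the sketch runs without a browser."""

    def __init__(self, body: str):
        self.log = []
        self._body = body

    def goto(self, url):
        self.log.append(("goto", url))

    def fill(self, selector, text):
        self.log.append(("fill", selector, text))

    def click(self, selector):
        self.log.append(("click", selector))

    def content(self):
        return self._body


steps = [
    {"action": "goto", "url": "https://practicetestautomation.com/practice-test-login/"},
    {"action": "type", "selector": "#username", "text": "student"},
    {"action": "click", "selector": "#submit"},
    {"action": "assert_text_visible", "text": "Congratulations student"},
]

page = FakePage(body="Congratulations student")
for step in steps:
    replay_step(page, step)
print(page.log)  # three browser actions; the final assertion step passed
```

With a real Playwright `Page`, the same dispatch would drive an actual browser; the fake object only exists to keep the example self-contained.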
Discover Test Steps: > "Discover potential test steps starting from https://practicetestautomation.com/practice/"
List Recorded Tests: > "List the available recorded web tests."
(Lists the `.json` files found in the `output/` directory.)

Output:

- Recorded tests are saved as JSON files in the `output/` directory (see `test_schema.md` for the format).
- Execution results are saved to `output/execution_result_....json`.
- Discovery results are saved to `output/discovery_results_....json`.
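For context, a `list_recorded_tests`-style lookup amounts to scanning `output/` for recorded-test `.json` files. A minimal sketch (the helper name and the `test_` filename filter are assumptions about this project's conventions):

```python
from pathlib import Path
import tempfile


def list_recorded_tests(output_dir) -> list:
    """Return the sorted names of recorded-test .json files in output_dir.

    Filtering on the test_ prefix is an assumed naming convention.
    """
    return sorted(
        p.name
        for p in Path(output_dir).glob("*.json")
        if p.name.startswith("test_")
    )


# Demo against a temporary directory standing in for output/.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "test_login_20231105.json").write_text("{}")
    (Path(d) / "execution_result_20231105.json").write_text("{}")
    found = list_recorded_tests(d)

print(found)  # -> ['test_login_20231105.json']
```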
.Contributions are welcome! Please see CONTRIBUTING.md for details on how to get started, report issues, and submit pull requests.
This project is licensed under the Apache-2.0 License.