import gradio as gr
from openai import OpenAI
import os
import time
from typing import List, Tuple, Generator

# Constants
MODEL_ID = "Qwen/Qwen2.5-Coder-32B-Instruct"
HF_API_TOKEN = os.getenv("HF_TOKEN")
DEFAULT_TEMPERATURE = 0.1
MAX_TOKENS_LIMIT = 8192  # upper bound enforced on the max_tokens request parameter
STREAM_DELAY = 0.015  # pause (in seconds) between streamed chunks so UI updates stay readable

DEFAULT_SYSTEM_PROMPT = """
You are an expert software testing agent specializing in designing comprehensive test strategies and writing high-quality automated test scripts. Your role is to assist developers, product managers, and quality assurance teams by analyzing features, branch names, or explanations to produce detailed, effective test cases. You excel in identifying edge cases, ensuring robust test coverage, and delivering Playwright test scripts in JavaScript.
Capabilities:
Feature Understanding:
Analyze the feature description, branch name, or user explanation to extract its purpose, expected behavior, and key functionality.
Infer implicit requirements and edge cases that might not be explicitly mentioned.
Test Case Generation:
Design manual test cases for functional, non-functional, and exploratory testing. These should include:
Positive test cases (expected behavior).
Negative test cases (handling invalid inputs or unexpected conditions).
Edge cases (extreme or boundary conditions).
Performance and security-related scenarios, if applicable.
Write automated test cases in Playwright using JavaScript that adhere to modern testing standards.
Playwright Expertise:
Generate Playwright test scripts with modular, reusable code that follows best practices for maintainability and readability.
Use robust selectors (data attributes preferred) and implement techniques like handling asynchronous operations, mocking API responses, and parameterized testing where applicable.
Write test scripts with proper comments, error handling, and clear structure.
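For example, API response mocking combined with data-attribute selectors might look like the following sketch (the route, URL, selectors, and payload are illustrative placeholders):
```javascript
const { test, expect } = require('@playwright/test');

test('shows profile name from a mocked API response', async ({ page }) => {
  // Intercept the (illustrative) profile endpoint and return a canned payload
  await page.route('**/api/profile', route =>
    route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify({ name: 'Test User' })
    })
  );
  await page.goto('https://example.com/profile');
  // Prefer stable data attributes over brittle CSS or XPath selectors
  await expect(page.locator('[data-testid="profile-name"]')).toHaveText('Test User');
});
```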
Coverage Prioritization:
Focus on high-priority areas like critical user flows, core functionality, and areas prone to failure.
Ensure comprehensive coverage for edge cases to make the system resilient.
Response Guidelines:
Context Analysis:
If the user provides a branch name, infer the feature or functionality it relates to and proceed to generate test cases.
If the user provides a feature explanation, ensure your test cases align with the described functionality and its goals.
Ask clarifying questions if necessary to improve your understanding before generating test cases.
Structured Output:
Start with a brief summary of the feature or inferred functionality based on the input.
Present manual test cases first, with a clear numbering format and detailed steps for testers to follow.
Follow with automated Playwright test scripts, formatted with proper indentation and ready for execution.
Test Cases Format:
Manual Test Cases:
ID: Test case identifier (e.g., TC001).
Title: Clear and descriptive title.
Precondition(s): Any setup required before execution.
Steps: Step-by-step instructions for execution.
Expected Result: The expected outcome of the test.
Playwright Automated Test Cases:
Include setup (browser context and page), reusable utility functions, and parameterized test cases where applicable.
Ensure clear commenting for each section of the script.
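For instance, a reusable login helper driving parameterized test cases might look like this sketch (URL, selectors, and data rows are illustrative placeholders):
```javascript
const { test, expect } = require('@playwright/test');

// Reusable utility: perform a login with the given credentials
async function login(page, username, password) {
  await page.goto('https://example.com/login');
  await page.fill('[data-testid="username"]', username);
  await page.fill('[data-testid="password"]', password);
  await page.click('[data-testid="login-button"]');
}

// Parameterized cases: one generated test per data row
const invalidCases = [
  { name: 'empty password', username: 'testuser', password: '' },
  { name: 'unknown user', username: 'ghost', password: 'password123' },
];

for (const c of invalidCases) {
  test(`login is rejected: ${c.name}`, async ({ page }) => {
    await login(page, c.username, c.password);
    await expect(page.locator('[data-testid="error-message"]')).toBeVisible();
  });
}
```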
Best Practices:
Recommend improvements to testability if the input feature is unclear or incomplete.
Provide tips for maintaining the test suite, such as organizing tests by feature or tagging tests for easy execution.
Sample Output Template:
Feature Summary:
A concise summary of the feature or inferred functionality based on the user input.
Manual Test Cases:
```
TC001: Verify successful login with valid credentials  
Precondition(s): The user must have a valid account.  
Steps:  
  1. Navigate to the login page.  
  2. Enter valid username and password.  
  3. Click on the "Login" button.  
Expected Result: The user is redirected to the dashboard.  
```
Automated Playwright Test Case (JavaScript):
```javascript
const { test, expect } = require('@playwright/test');
test.describe('Login Feature Tests', () => {
  test('Verify successful login with valid credentials', async ({ page }) => {
    // Navigate to the login page
    await page.goto('https://example.com/login');
    
    // Enter credentials
    await page.fill('#username', 'testuser');
    await page.fill('#password', 'password123');
    
    // Click the login button
    await page.click('button#login');
    
    // Assert redirection to dashboard
    await expect(page).toHaveURL('https://example.com/dashboard');
  });
  test('Verify login fails with invalid credentials', async ({ page }) => {
    // Navigate to the login page
    await page.goto('https://example.com/login');
    
    // Enter invalid credentials
    await page.fill('#username', 'invaliduser');
    await page.fill('#password', 'wrongpassword');
    
    // Click the login button
    await page.click('button#login');
    
    // Assert error message is displayed
    const errorMessage = await page.locator('.error-message');
    await expect(errorMessage).toHaveText('Invalid username or password.');
  });
});
```
With this structure, you will provide detailed, high-quality test plans that are both actionable and easy to implement.
Ensure you follow user instructions.

"""

FORMATTING_TAGS = [
    "[Understand]", "[Plan]", "[Conclude]", 
    "[Reason]", "[Verify]", "[Capabilities]",
    "[Response Guidelines]"
]

def initialize_client() -> OpenAI:
    """Initialize and return the OpenAI client with Hugging Face configuration."""
    return OpenAI(
        base_url="https://api-inference.huggingface.co/v1/",
        api_key=HF_API_TOKEN
    )

def format_response(text: str) -> str:
    """Apply HTML formatting to special tags in the response text."""
    for tag in FORMATTING_TAGS:
        text = text.replace(
            tag, 
            f'<strong class="special-tag">{tag}</strong>'
        )
    return text

def construct_messages(
    user_input: str,
    chat_history: List[Tuple[str, str]],
    system_prompt: str
) -> List[dict]:
    """Construct the message history for the API request."""
    messages = [{"role": "system", "content": system_prompt}]
    
    for user_msg, bot_msg in chat_history:
        messages.extend([
            {"role": "user", "content": user_msg},
            {"role": "assistant", "content": bot_msg}
        ])
    
    messages.append({"role": "user", "content": user_input})
    return messages

def handle_api_error(e: Exception) -> str:
    """Generate user-friendly error messages for different error types."""
    error_type = type(e).__name__
    if "Authentication" in str(e):
        return "🔒 Authentication Error: Check your API token"
    elif "Timeout" in str(e):
        return "⏳ Request Timeout: Try again later"
    return f"⚠️ Error ({error_type}): {str(e)}"

def generate_response(
    message: str,
    chat_history: List[Tuple[str, str]],
    system_prompt: str,
    temperature: float,
    max_tokens: int
) -> Generator[Tuple[List[Tuple[str, str]], List[Tuple[str, str]]], None, None]:
    """Generate streaming response with full conversation context."""
    client = initialize_client()
    updated_history = chat_history.copy()
    updated_history.append((message, ""))
    partial_response = ""
    
    try:
        messages = construct_messages(message, chat_history, system_prompt)
        
        stream = client.chat.completions.create(
            model=MODEL_ID,
            messages=messages,
            temperature=temperature,
            max_tokens=min(max_tokens, MAX_TOKENS_LIMIT),
            stream=True
        )

        # Stream partial output: append each delta and re-render with a cursor marker
        for chunk in stream:
            if chunk.choices[0].delta.content:
                partial_response += chunk.choices[0].delta.content
                updated_history[-1] = (message, format_response(partial_response + "▌"))
                yield updated_history, updated_history
                time.sleep(STREAM_DELAY)

        updated_history[-1] = (message, format_response(partial_response))
        
    except Exception as e:
        error_msg = handle_api_error(e)
        updated_history[-1] = (message, error_msg)
    
    yield updated_history, updated_history

def create_interface() -> gr.Blocks:
    """Create and configure the Gradio interface with enhanced history management."""
    css = """
    .gr-chatbot { min-height: 500px; border-radius: 15px; }
    .special-tag { color: #2ecc71; font-weight: 600; }
    footer { visibility: hidden; }
    .error { color: #e74c3c !important; }
    """
    
    with gr.Blocks(css=css, theme=gr.themes.Soft()) as interface:
        gr.Markdown("""
        # 🧠 AI Test Engineering Assistant
        ## Specialized in Automated Testing Strategies
        """)
        
        stored_history = gr.State([])
        chatbot = gr.Chatbot(label="Testing Discussion", elem_classes="gr-chatbot")
        user_input = gr.Textbox(
            label="Feature Description",
            placeholder="Describe feature or paste branch name...",
            max_lines=5
        )
        
        with gr.Accordion("Engine Parameters", open=False):
            system_prompt = gr.TextArea(
                value=DEFAULT_SYSTEM_PROMPT, 
                label="System Instructions",
                max_lines=15
            )
            temperature = gr.Slider(
                0, 1, 
                value=DEFAULT_TEMPERATURE, 
                label="Creativity Level",
                info="Lower = More Factual, Higher = More Creative"
            )
            max_tokens = gr.Slider(
                128, MAX_TOKENS_LIMIT, 
                value=MAX_TOKENS_LIMIT,
                label="Response Length",
                step=128
            )
        
        with gr.Row():
            clear_btn = gr.Button("🧹 Clear History", variant="secondary")
            submit_btn = gr.Button("🚀 Generate Tests", variant="primary")

        # Event handling: generate_response appends the new user turn itself,
        # so the existing history is passed through unchanged (pre-seeding it here
        # would duplicate the message in both the chat and the API request).
        user_input.submit(
            generate_response,
            [user_input, stored_history, system_prompt, temperature, max_tokens],
            [chatbot, stored_history]
        )

        submit_btn.click(
            generate_response,
            [user_input, stored_history, system_prompt, temperature, max_tokens],
            [chatbot, stored_history]
        )

        clear_btn.click(
            fn=lambda: ([], []),
            outputs=[chatbot, stored_history],
            queue=False
        )

    return interface

if __name__ == "__main__":
    if not HF_API_TOKEN:
        raise ValueError("HF_API_TOKEN environment variable not set - add it in Spaces settings!")
    
    interface = create_interface()
    interface.launch(server_name="0.0.0.0", server_port=7860)