
Commit bc2f4af

Committed by ChidcGithub

release: v1.0.0-beta - LLM AI Refactoring & Bug Fixes

- Redesigned AI explanation panel with black/white minimalist theme
- Added unavailable state UI when LLM not configured
- Improved prompt templates with a clearer JSON output format
- Multi-strategy JSON parsing with fallback mechanisms
- Thread-safe caching with TTL for explanations
- Case-insensitive model name matching
- Fixed model not loading when clicking the Load button
- Fixed UI stuck in the loading state
- Fixed false 'loaded' display when the model was not actually loaded
- Fixed React Hooks error (useMemo order)
- Added custom event system for cross-component communication
1 parent e1b0c2b commit bc2f4af

24 files changed

Lines changed: 3115 additions & 1426 deletions
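Among the changes above, "thread-safe caching with TTL for explanations" names a standard pattern: guard a dict with a lock and stamp each entry with an expiry time. A minimal sketch of that pattern (hypothetical `TTLCache` class, not the commit's actual code in `backend/llm/service.py`):

```python
import threading
import time
from typing import Any, Optional


class TTLCache:
    """Minimal thread-safe cache with a per-entry time-to-live."""

    def __init__(self, ttl_seconds: float = 300.0):
        self._ttl = ttl_seconds
        self._lock = threading.Lock()
        self._store: dict = {}  # key -> (expires_at, value)

    def get(self, key: str) -> Optional[Any]:
        with self._lock:
            entry = self._store.get(key)
            if entry is None:
                return None
            expires_at, value = entry
            if time.monotonic() > expires_at:
                del self._store[key]  # expired: evict and report a miss
                return None
            return value

    def set(self, key: str, value: Any) -> None:
        with self._lock:
            self._store[key] = (time.monotonic() + self._ttl, value)


cache = TTLCache(ttl_seconds=0.05)
cache.set("explanation:FunctionDef", "Defines a function")
print(cache.get("explanation:FunctionDef"))  # hit before expiry
time.sleep(0.1)
print(cache.get("explanation:FunctionDef"))  # miss after the TTL elapses
```

Using `time.monotonic()` rather than `time.time()` keeps expiry correct even if the wall clock is adjusted while the server runs.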

README.md

Lines changed: 54 additions & 3 deletions
@@ -13,11 +13,11 @@

 # PyVizAST

-[![Version](https://img.shields.io/badge/Version-1.0.0--alpha-blue.svg)](https://github.com/ChidcGithub/PyVizAST)
+[![Version](https://img.shields.io/badge/Version-1.0.0--beta-blue.svg)](https://github.com/ChidcGithub/PyVizAST)
 [![Python](https://img.shields.io/badge/Python-3.9%2B-brightgreen.svg)](https://www.python.org/)
 [![License](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](LICENSE)
 [![Platform](https://img.shields.io/badge/Platform-Windows%20%7C%20Linux%20%7C%20macOS-lightgrey.svg)](https://github.com/ChidcGithub/PyVizAST)
-[![Status](https://img.shields.io/badge/Status-alpha-red.svg)](https://github.com/ChidcGithub/PyVizAST)
+[![Status](https://img.shields.io/badge/Status-beta-orange.svg)](https://github.com/ChidcGithub/PyVizAST)
 ![CI Build Status](https://github.com/ChidcGithub/PyVizAST/actions/workflows/ci.yml/badge.svg)

 A Python AST Visualizer & Static Analyzer that transforms code into interactive graphs. Detect complexity, performance bottlenecks, and code smells with actionable refactoring suggestions.

@@ -48,7 +48,7 @@ A Python AST Visualizer & Static Analyzer that transforms code into interactive
 - **Beginner Mode**: Display Python documentation when hovering over AST nodes
 - **Challenge Mode**: Identify performance issues in provided code samples

-### LLM AI Features (v1.0.0-alpha)
+### LLM AI Features (v1.0.0-beta)
 - **Local LLM Integration**: Powered by Ollama for privacy-first AI features
 - **Auto Install Ollama**: One-click automatic Ollama installation and configuration
 - **AI Node Explanations**: Get intelligent explanations for any AST node

@@ -235,6 +235,57 @@ Contributions are welcome. Please submit pull requests to the main repository.

 <summary>Version History</summary>

+<details>
+<summary>v1.0.0-beta (2026-03-21)</summary>
+
+**LLM AI Refactoring & Bug Fixes**
+
+**LLM Explanation Panel Refactoring:**
+- Redesigned AI explanation panel with premium black/white minimalist theme
+- Added unavailable state UI with helpful messages when LLM not configured
+- Added fullscreen modal for detailed reading
+- Improved loading states and error handling
+- Auto-retry on failure (up to 2 times)
+
+**Backend LLM Service Refactoring:**
+- Improved prompt templates with clearer JSON output format requirements
+- Multi-strategy JSON parsing with fallback mechanisms
+- Thread-safe caching with TTL for explanations
+- Case-insensitive model name matching (codeLlama:7b vs codellama:7b)
+- Separated error handling for availability check and model listing
+- Added shorter timeouts to avoid UI hanging
+
+**Frontend LLM Integration Improvements:**
+- Added custom event system (`llmConfigChanged`) for cross-component communication
+- Fixed React Hooks order issue (useMemo before early return)
+- Fixed incorrect default status value (`'ready'` → `'unavailable'`)
+- Improved SSE parsing with type annotations
+- Better error feedback and loading states
+
+**Files Added:**
+- `frontend/src/components/LLMExplanationPanel.js` - AI explanation panel component
+- `frontend/src/components/LLMExplanationPanel.css` - Black/white minimalist styles
+
+**Files Modified:**
+- `backend/llm/prompts.py` - Improved prompt templates
+- `backend/llm/service.py` - Multi-strategy parsing, caching, status handling
+- `backend/llm/ollama_client.py` - Shorter timeouts
+- `backend/routers/llm.py` - Unified error responses, detailed logging
+- `frontend/src/api.js` - SSE parsing, shorter timeouts
+- `frontend/src/components/LLMSettings.js` - Event dispatching, improved status handling
+- `frontend/src/components/ASTVisualizer.js` - Event listeners, fixed default status
+- `frontend/src/components/ASTVisualizer3D.js` - Event listeners, fixed default status
+- `frontend/src/components/components.css` - LLM explanation panel styles
+
+**Bug Fixes:**
+- Fixed model not loading when clicking Load button
+- Fixed UI stuck at loading state (timeout optimization)
+- Fixed false "loaded" display when model not actually loaded
+- Fixed React Hooks error (useMemo called after early return)
+- Fixed model name case sensitivity matching
+
+</details>
+
 <details>
 <summary>v1.0.0-alpha (2026-03-16)</summary>
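The "multi-strategy JSON parsing with fallback mechanisms" noted in the changelog describes a common pattern for LLM output: try strict parsing first, then progressively looser recovery. A hedged sketch of that pattern (hypothetical `parse_llm_json` helper; the commit's actual strategies in `backend/llm/service.py` are not shown on this page):

```python
import json
import re
from typing import Any, Optional


def parse_llm_json(raw: str) -> Optional[Any]:
    """Try several strategies to recover JSON from an LLM reply."""
    # Strategy 1: the whole reply is already valid JSON.
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass

    # Strategy 2: JSON wrapped in a fenced ```json ... ``` block.
    fence = re.search(r"```(?:json)?\s*(.*?)```", raw, re.DOTALL)
    if fence:
        try:
            return json.loads(fence.group(1))
        except json.JSONDecodeError:
            pass

    # Strategy 3: the outermost {...} span embedded in surrounding prose.
    start, end = raw.find("{"), raw.rfind("}")
    if start != -1 and end > start:
        try:
            return json.loads(raw[start:end + 1])
        except json.JSONDecodeError:
            pass

    # All strategies failed: the caller falls back to plain-text handling.
    return None


print(parse_llm_json('Sure! ```json\n{"summary": "a loop"}\n``` hope that helps'))
# → {'summary': 'a loop'}
```

Ordering matters: the strict strategy runs first so well-formed replies never pass through the lossier regex-based recovery.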

backend/ast_parser/node_builder.py

Lines changed: 0 additions & 12 deletions
@@ -262,18 +262,6 @@ def _count_structures(self, node: ast.AST) -> Dict[str, int]:

         return counts

-    def _count_branches(self, node: ast.AST) -> int:
-        """Count number of if/elif/else branches - deprecated, use _count_structures"""
-        return self._count_structures(node)['branches']
-
-    def _count_loops(self, node: ast.AST) -> int:
-        """Count number of loops - deprecated, use _count_structures"""
-        return self._count_structures(node)['loops']
-
-    def _count_exception_handlers(self, node: ast.AST) -> int:
-        """Count number of except handlers - deprecated, use _count_structures"""
-        return self._count_structures(node)['exception_handlers']
-
     def _generate_detailed_label(self, ast_node: ast.AST, node_type: NodeType,
                                  name: Optional[str], attributes: Dict[str, Any]) -> str:
         """Generate detailed node label for better understanding"""

backend/config.py

Lines changed: 2 additions & 2 deletions
@@ -3,6 +3,6 @@
 All version numbers are managed in this single file for easy updates.
 """

-VERSION = "1.0.0-alpha"
-BUILD = "3230"
+VERSION = "1.0.0-beta"
+BUILD = "3300"
 FULL_VERSION = f"v{VERSION}"

backend/llm/ollama_client.py

Lines changed: 107 additions & 33 deletions
@@ -9,6 +9,7 @@
 import logging
 import httpx
 from typing import List, Optional, Dict, Any, AsyncIterator
+import atexit

 from .models import ModelInfo, LLMConfig

@@ -21,27 +22,97 @@ class OllamaError(Exception):


 class OllamaClient:
-    """Client for Ollama API"""
+    """Client for Ollama API with connection pooling"""
+
+    # Shared client instance for connection pooling
+    _shared_client: Optional[httpx.AsyncClient] = None
+
+    @classmethod
+    def _get_shared_client(cls, timeout: httpx.Timeout) -> httpx.AsyncClient:
+        """Get or create shared HTTP client for connection pooling"""
+        if cls._shared_client is None or cls._shared_client.is_closed:
+            cls._shared_client = httpx.AsyncClient(
+                timeout=timeout,
+                limits=httpx.Limits(
+                    max_connections=10,
+                    max_keepalive_connections=5,
+                    keepalive_expiry=30.0
+                )
+            )
+        return cls._shared_client
+
+    @classmethod
+    def close_shared_client(cls):
+        """Close shared client (call on shutdown)
+
+        This method is registered with atexit but may not work reliably
+        for async clients. Prefer using shutdown_async() in FastAPI lifespan.
+        """
+        if cls._shared_client and not cls._shared_client.is_closed:
+            import asyncio
+            client = cls._shared_client
+            cls._shared_client = None  # Clear reference first to prevent double close
+
+            try:
+                # Try to get running loop
+                loop = asyncio.get_running_loop()
+                # If we have a running loop, schedule the close
+                loop.call_soon_threadsafe(lambda: loop.create_task(client.aclose()))
+            except RuntimeError:
+                # No running loop available
+                try:
+                    # Try to create a new loop and run the close
+                    loop = asyncio.new_event_loop()
+                    asyncio.set_event_loop(loop)
+                    try:
+                        loop.run_until_complete(client.aclose())
+                    finally:
+                        loop.close()
+                except Exception as e:
+                    # Last resort: just set to None and let GC handle it
+                    logger.debug(f"Could not close HTTP client gracefully: {e}")
+
+    @classmethod
+    async def shutdown_async(cls):
+        """Async shutdown for use in FastAPI lifespan context"""
+        if cls._shared_client and not cls._shared_client.is_closed:
+            client = cls._shared_client
+            cls._shared_client = None
+            try:
+                await client.aclose()
+                logger.debug("HTTP client closed successfully")
+            except Exception as e:
+                logger.warning(f"Error closing HTTP client: {e}")

     def __init__(self, config: LLMConfig):
         self.config = config
         self.base_url = config.base_url.rstrip("/")
         self.timeout = httpx.Timeout(config.timeout)

+    @property
+    def client(self) -> httpx.AsyncClient:
+        """Get shared HTTP client"""
+        return self._get_shared_client(self.timeout)
+
     async def is_available(self) -> bool:
         """Check if Ollama server is running"""
         try:
+            # Use a quick timeout for availability check
             async with httpx.AsyncClient(timeout=5.0) as client:
                 response = await client.get(f"{self.base_url}/api/tags")
                 return response.status_code == 200
-        except Exception as e:
+        except (httpx.ConnectError, httpx.TimeoutException, httpx.NetworkError) as e:
             logger.debug(f"Ollama not available: {e}")
             return False
+        except httpx.HTTPStatusError as e:
+            logger.warning(f"Ollama returned error status: {e}")
+            return False

     async def list_models(self) -> List[ModelInfo]:
         """List all pulled models"""
         try:
-            async with httpx.AsyncClient(timeout=self.timeout) as client:
+            # Use a shorter timeout for listing models
+            async with httpx.AsyncClient(timeout=10.0) as client:
                 response = await client.get(f"{self.base_url}/api/tags")
                 response.raise_for_status()
                 data = response.json()

@@ -56,21 +127,23 @@ async def list_models(self) -> List[ModelInfo]:
                     details=model.get("details")
                 ))
             return models
+        except (httpx.ConnectError, httpx.TimeoutException, httpx.NetworkError) as e:
+            logger.debug(f"Failed to list models (network error): {e}")
+            raise OllamaError(f"Failed to connect to Ollama: {e}")
         except httpx.HTTPError as e:
             logger.error(f"Failed to list models: {e}")
             raise OllamaError(f"Failed to list models: {e}")

     async def get_model_info(self, model_name: str) -> Optional[Dict[str, Any]]:
         """Get information about a specific model"""
         try:
-            async with httpx.AsyncClient(timeout=self.timeout) as client:
-                response = await client.post(
-                    f"{self.base_url}/api/show",
-                    json={"name": model_name}
-                )
-                if response.status_code == 200:
-                    return response.json()
-                return None
+            response = await self.client.post(
+                f"{self.base_url}/api/show",
+                json={"name": model_name}
+            )
+            if response.status_code == 200:
+                return response.json()
+            return None
         except Exception as e:
             logger.error(f"Failed to get model info: {e}")
             return None

@@ -99,12 +172,11 @@ async def pull_model(self, model_name: str) -> AsyncIterator[Dict[str, Any]]:
     async def delete_model(self, model_name: str) -> bool:
         """Delete a model"""
         try:
-            async with httpx.AsyncClient(timeout=self.timeout) as client:
-                response = await client.delete(
-                    f"{self.base_url}/api/delete",
-                    json={"name": model_name}
-                )
-                return response.status_code == 200
+            response = await self.client.delete(
+                f"{self.base_url}/api/delete",
+                json={"name": model_name}
+            )
+            return response.status_code == 200
         except Exception as e:
             logger.error(f"Failed to delete model: {e}")
             return False

@@ -135,14 +207,13 @@ async def generate(
             payload["system"] = system

         try:
-            async with httpx.AsyncClient(timeout=self.timeout) as client:
-                response = await client.post(
-                    f"{self.base_url}/api/generate",
-                    json=payload
-                )
-                response.raise_for_status()
-                data = response.json()
-                return data.get("response", "")
+            response = await self.client.post(
+                f"{self.base_url}/api/generate",
+                json=payload
+            )
+            response.raise_for_status()
+            data = response.json()
+            return data.get("response", "")
         except httpx.HTTPError as e:
             logger.error(f"Generation failed: {e}")
             raise OllamaError(f"Generation failed: {e}")

@@ -213,14 +284,17 @@ async def chat(
         }

         try:
-            async with httpx.AsyncClient(timeout=self.timeout) as client:
-                response = await client.post(
-                    f"{self.base_url}/api/chat",
-                    json=payload
-                )
-                response.raise_for_status()
-                data = response.json()
-                return data.get("message", {}).get("content", "")
+            response = await self.client.post(
+                f"{self.base_url}/api/chat",
+                json=payload
+            )
+            response.raise_for_status()
+            data = response.json()
+            return data.get("message", {}).get("content", "")
         except httpx.HTTPError as e:
             logger.error(f"Chat failed: {e}")
             raise OllamaError(f"Chat failed: {e}")
+
+
+# Register cleanup on exit
+atexit.register(OllamaClient.close_shared_client)
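The `close_shared_client` docstring steers callers toward `shutdown_async()` in a FastAPI lifespan hook, since `atexit` cannot reliably await an async close. The close-once lifecycle that `shutdown_async` implements can be sketched with a stand-in resource (pure asyncio, no httpx; every name except `shutdown_async` is invented for illustration):

```python
import asyncio
from typing import Optional


class SharedResource:
    """Stand-in for OllamaClient's shared httpx client (illustrative only)."""

    _shared: Optional["SharedResource"] = None

    def __init__(self) -> None:
        self.closed = False

    @classmethod
    def get(cls) -> "SharedResource":
        # Lazily (re)create the shared instance, as _get_shared_client() does
        if cls._shared is None or cls._shared.closed:
            cls._shared = cls()
        return cls._shared

    async def aclose(self) -> None:
        self.closed = True

    @classmethod
    async def shutdown_async(cls) -> None:
        # Clear the class reference before awaiting, so a concurrent or
        # repeated shutdown call becomes a harmless no-op
        if cls._shared and not cls._shared.closed:
            resource, cls._shared = cls._shared, None
            await resource.aclose()


async def main() -> None:
    first = SharedResource.get()
    assert SharedResource.get() is first      # reused while open
    await SharedResource.shutdown_async()
    assert first.closed                       # closed exactly once
    assert SharedResource.get() is not first  # recreated on next use


asyncio.run(main())
```

In the real app, the equivalent call would be `await OllamaClient.shutdown_async()` from the shutdown side of a FastAPI lifespan context manager, where the event loop is still running and `aclose()` can actually be awaited.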
