release: v1.0.0-beta - LLM AI Refactoring & Bug Fixes
- Redesigned AI explanation panel with black/white minimalist theme
- Added unavailable state UI when LLM not configured
- Improved prompt templates with clearer JSON output format
- Multi-strategy JSON parsing with fallback mechanisms
- Thread-safe caching with TTL for explanations
- Case-insensitive model name matching
- Fixed model not loading when clicking Load button
- Fixed UI stuck at loading state
- Fixed false 'loaded' display when model not actually loaded
- Fixed React Hooks error (useMemo order)
- Added custom event system for cross-component communication
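The "multi-strategy JSON parsing with fallback mechanisms" above can be sketched roughly as follows. This is an illustrative outline, not the project's actual code; the function name and strategy order are assumptions:

```python
import json
import re

def parse_llm_json(raw: str) -> dict:
    """Try several strategies to extract a JSON object from LLM output."""
    # Strategy 1: the whole response is already valid JSON.
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass

    # Strategy 2: JSON wrapped in a ```json ... ``` code fence.
    fence = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", raw, re.DOTALL)
    if fence:
        try:
            return json.loads(fence.group(1))
        except json.JSONDecodeError:
            pass

    # Strategy 3: first {...} span anywhere in the text.
    brace = re.search(r"\{.*\}", raw, re.DOTALL)
    if brace:
        try:
            return json.loads(brace.group(0))
        except json.JSONDecodeError:
            pass

    # Fallback: wrap the raw text so callers always receive a dict.
    return {"explanation": raw.strip()}
```

The point of the fallback chain is that callers never have to handle a parse exception: even a completely malformed model response degrades to a plain-text explanation.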
A Python AST Visualizer & Static Analyzer that transforms code into interactive graphs. Detect complexity, performance bottlenecks, and code smells with actionable refactoring suggestions.
@@ -48,7 +48,7 @@ A Python AST Visualizer & Static Analyzer that transforms code into interactive
 - **Beginner Mode**: Display Python documentation when hovering over AST nodes
 - **Challenge Mode**: Identify performance issues in provided code samples
 
-### LLM AI Features (v1.0.0-alpha)
+### LLM AI Features (v1.0.0-beta)
 - **Local LLM Integration**: Powered by Ollama for privacy-first AI features
 - **Auto Install Ollama**: One-click automatic Ollama installation and configuration
 - **AI Node Explanations**: Get intelligent explanations for any AST node
@@ -235,6 +235,57 @@ Contributions are welcome. Please submit pull requests to the main repository.
 
 <summary>Version History</summary>
 
+<details>
+<summary>v1.0.0-beta (2026-03-21)</summary>
+
+**LLM AI Refactoring & Bug Fixes**
+
+**LLM Explanation Panel Refactoring:**
+- Redesigned AI explanation panel with premium black/white minimalist theme
+- Added unavailable state UI with helpful messages when LLM not configured
+- Added fullscreen modal for detailed reading
+- Improved loading states and error handling
+- Auto-retry on failure (up to 2 times)
+
+**Backend LLM Service Refactoring:**
+- Improved prompt templates with clearer JSON output format requirements
+- Multi-strategy JSON parsing with fallback mechanisms
+- Thread-safe caching with TTL for explanations
+- Case-insensitive model name matching (codeLlama:7b vs codellama:7b)
+- Separated error handling for availability check and model listing
+- Added shorter timeouts to avoid UI hanging
+
+**Frontend LLM Integration Improvements:**
+- Added custom event system (`llmConfigChanged`) for cross-component communication
+- Fixed React Hooks order issue (useMemo before early return)
+- Fixed incorrect default status value (`'ready'` → `'unavailable'`)
+- Improved SSE parsing with type annotations
+- Better error feedback and loading states
+
+**Files Added:**
+- `frontend/src/components/LLMExplanationPanel.js` - AI explanation panel component