Frontend Features
Complete guide to ExtendedLM's user interface and frontend capabilities
Overview
ExtendedLM features a modern, responsive frontend built with Next.js 15.3.1, React 19.1, and TypeScript 5.8. The interface provides an intuitive chat experience with powerful right-panel features, customizable settings, and real-time updates via Server-Sent Events (SSE).
- Framework: Next.js 15.3.1 with App Router
- UI Library: React 19.1 with TypeScript 5.8
- State Management: Zustand for global state
- Styling: Tailwind CSS with custom design system
- Real-time: Server-Sent Events (SSE) for streaming
- Authentication: Supabase Auth with social providers
Chat Interface
The main chat interface is the core of ExtendedLM, providing a powerful and intuitive conversation experience with AI agents.
Message Types
User Messages
Standard user input with support for:
- Text input: Multi-line text with Markdown support
- Image attachments: Upload and analyze images (vision models)
- File attachments: Attach documents for RAG processing
- Voice input: Speech-to-text with Whisper API
- Screen capture: Capture and attach screenshots
Assistant Messages
AI responses with rich formatting:
- Markdown rendering: Full GitHub-flavored Markdown
- Code highlighting: Syntax highlighting for 100+ languages
- Streaming responses: Real-time token-by-token display
- Tool calls: Display of tool invocations and results
- Artifacts: Interactive code, charts, and visualizations
System Messages
Informational messages about:
- Agent switches and handoffs
- Tool execution status
- RAG document retrieval
- Error messages and warnings
Input Features
// Message input component location
// File: app/components/Chat/MessageInput.tsx
interface MessageInputProps {
  conversationId: string;
  onSend: (message: string, attachments?: File[]) => void;
  streaming: boolean;
}
// Features:
// - Auto-resize textarea
// - File drag-and-drop
// - Keyboard shortcuts (Enter to send, Shift+Enter for newline)
// - Voice input toggle
// - Screen capture button
// - Attachment preview
// - Character/token counter
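The Enter-to-send behavior listed above can be sketched as a small pure function. This is an illustrative sketch, not the actual component code: `shouldSend` and the `KeyInfo` shape are assumed names, and the `enterToSend` flag mirrors the Behavior setting described later.

```typescript
// Minimal key-handling decision for the message input (illustrative).
interface KeyInfo {
  key: string;
  shiftKey: boolean;
}

export function shouldSend(
  e: KeyInfo,
  enterToSend: boolean,
  streaming: boolean,
): boolean {
  if (streaming) return false; // never send while a response is streaming
  if (e.key !== 'Enter') return false;
  // With Enter-to-send on, plain Enter sends and Shift+Enter inserts a
  // newline; with it off, the roles are reversed.
  return enterToSend ? !e.shiftKey : e.shiftKey;
}
```

The component would call this from its `onKeyDown` handler and either submit the message or let the newline through.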
Message Actions
Each message includes action buttons:
- Copy: Copy message text to clipboard
- Edit: Edit user messages (regenerates response)
- Regenerate: Request a new assistant response
- Branch: Create conversation branch from this point
- Speak: Text-to-speech playback
- Translate: Quick translation to target language
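The Branch action above seeds a new conversation with the history up to the branch point. A hedged sketch of that logic, assuming a simple message array (the `branchFrom` helper and `ChatMessage` type are illustrative, not the app's actual API):

```typescript
// Create the message history for a new branch (illustrative sketch).
interface ChatMessage {
  role: 'user' | 'assistant' | 'system';
  content: string;
}

export function branchFrom(
  messages: ChatMessage[],
  index: number,
): ChatMessage[] {
  if (index < 0 || index >= messages.length) {
    throw new RangeError('branch point out of range');
  }
  // Deep-copy each message so edits in the branch never mutate the
  // original conversation.
  return messages.slice(0, index + 1).map((m) => ({ ...m }));
}
```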
Code Block Features
Code blocks include interactive features:
- Syntax highlighting: Powered by Prism.js
- Language detection: Auto-detect or specify language
- Copy button: One-click copy to clipboard
- Line numbers: Optional line numbering
- Execute button: Run code in sandbox (Python, JavaScript, etc.)
- Download: Save code as file
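The copy, execute, and download actions all need the code blocks pulled out of a Markdown message first. A sketch of that extraction, assuming the common fenced-block form (`extractCodeBlocks` is an illustrative name; real Markdown parsing handles more edge cases than this regex):

```typescript
// Pull fenced code blocks out of a Markdown string (illustrative sketch).
export interface CodeBlock {
  language: string; // '' when unspecified (the auto-detect case)
  code: string;
}

export function extractCodeBlocks(markdown: string): CodeBlock[] {
  // Matches a fence, an optional language tag, then everything up to the
  // closing fence (non-greedy so consecutive blocks stay separate).
  const fence = /```([\w+-]*)\n([\s\S]*?)```/g;
  const blocks: CodeBlock[] = [];
  for (const match of markdown.matchAll(fence)) {
    blocks.push({ language: match[1], code: match[2] });
  }
  return blocks;
}
```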
Right Panel Features
The right panel provides contextual tools and features accessible during conversations.
Artifacts
Interactive previews for generated content:
Code Artifacts
- Live Preview: Real-time preview of HTML/CSS/JS
- React Components: Preview React components with hot reload
- Mermaid Diagrams: Render flowcharts, sequence diagrams, etc.
- SVG Graphics: Interactive SVG preview
Data Artifacts
- Tables: Interactive data tables with sorting/filtering
- Charts: Line, bar, pie charts with Chart.js
- JSON Viewer: Collapsible JSON tree view
- CSV Editor: Edit and download CSV data
// Artifact component structure
// File: app/components/Artifacts/ArtifactRenderer.tsx
interface Artifact {
  id: string;
  type: 'code' | 'html' | 'react' | 'mermaid' | 'chart' | 'table' | 'svg' | 'json';
  title: string;
  content: string;
  language?: string;
  metadata?: Record<string, unknown>;
}
// Supported artifact types:
const artifactRenderers = {
  code: CodePreview,
  html: HTMLPreview,
  react: ReactPreview,
  mermaid: MermaidDiagram,
  chart: ChartPreview,
  table: DataTable,
  svg: SVGPreview,
  json: JSONViewer,
};
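Dispatching on artifact type reduces to a map lookup. A minimal sketch with string stand-ins for the React components above; the fallback to the plain code preview for unknown types is an assumption, not confirmed by the source:

```typescript
// Renderer dispatch for artifacts (illustrative; values stand in for
// the React components listed above).
const renderers: Record<string, string> = {
  code: 'CodePreview',
  html: 'HTMLPreview',
  react: 'ReactPreview',
  mermaid: 'MermaidDiagram',
  chart: 'ChartPreview',
  table: 'DataTable',
  svg: 'SVGPreview',
  json: 'JSONViewer',
};

// Unknown artifact types fall back to the plain code preview rather
// than failing to render (assumed behavior).
export function pickRenderer(type: string): string {
  return renderers[type] ?? renderers.code;
}
```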
Canvas Mode
Full-screen artifact editing with split view:
- Split Editor: Code editor + live preview side-by-side
- Monaco Editor: VS Code-like editing experience
- Hot Reload: Instant preview updates
- Console Output: View console logs and errors
- Export: Download as standalone HTML/React app
Document Viewer
View and interact with attached documents:
- PDF Viewer: Embedded PDF viewer with page navigation
- Image Gallery: Lightbox for image attachments
- Text Preview: Syntax-highlighted text files
- RAG Highlights: Show retrieved chunks highlighted in context
Knowledge Graph Viewer
Visualize knowledge graph from GraphRAG:
- Interactive Graph: D3.js force-directed graph
- Entity Details: Click nodes to view entity information
- Relationship Explorer: Navigate entity relationships
- Community Clusters: Visual community detection results
- Export: Export graph as JSON or image
Settings & Customization
ExtendedLM provides extensive customization options through the Settings panel.
General Settings
Appearance
- Theme: Light, Dark, or System (auto-switch)
- Color Scheme: Multiple preset themes (Blue, Purple, Green, etc.)
- Font Size: Adjustable text size (Small, Medium, Large, X-Large)
- Font Family: Choose from multiple monospace/sans-serif fonts
- Message Density: Compact or Comfortable spacing
- Code Theme: Syntax highlighting theme (VS Dark, Monokai, GitHub, etc.)
Behavior
- Auto-scroll: Automatically scroll to new messages
- Enter to Send: Send message on Enter (vs. Shift+Enter)
- Show Timestamps: Display message timestamps
- Show Token Count: Display token usage
- Streaming: Enable/disable streaming responses
- Sound Effects: Enable notification sounds
Model Settings
// Model configuration interface
// File: app/types/settings.ts
interface ModelSettings {
  // Default model selection
  defaultModel: string;

  // Model parameters
  temperature: number;       // 0.0 - 2.0
  maxTokens: number;         // Max response length
  topP: number;              // Nucleus sampling (0.0 - 1.0)
  frequencyPenalty: number;  // -2.0 - 2.0
  presencePenalty: number;   // -2.0 - 2.0

  // Advanced options
  seed?: number;             // Deterministic sampling
  stopSequences: string[];   // Custom stop sequences
  logitBias?: Record<string, number>;

  // Agent-specific settings
  agentSettings: {
    [agentType: string]: {
      model: string;
      temperature: number;
      // ... agent-specific params
    };
  };
}
Available Parameters
- Temperature: Control randomness (0 = deterministic, 2 = very creative)
- Max Tokens: Maximum response length (128 - 128000)
- Top P: Nucleus sampling threshold
- Frequency Penalty: Reduce repetition of tokens
- Presence Penalty: Encourage topic diversity
- Stop Sequences: Custom sequences to end generation
- Seed: Deterministic sampling for reproducibility
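User-entered values need to be kept inside the documented ranges before they reach the API. A sketch of that validation, assuming simple clamping; `sanitizeModelParams` and `clamp` are illustrative helper names, not the app's actual functions:

```typescript
// Clamp model parameters to the documented ranges (illustrative sketch).
function clamp(value: number, min: number, max: number): number {
  return Math.min(max, Math.max(min, value));
}

export interface ModelParams {
  temperature: number;      // 0.0 - 2.0
  maxTokens: number;        // 128 - 128000
  topP: number;             // 0.0 - 1.0
  frequencyPenalty: number; // -2.0 - 2.0
  presencePenalty: number;  // -2.0 - 2.0
}

export function sanitizeModelParams(p: ModelParams): ModelParams {
  return {
    temperature: clamp(p.temperature, 0, 2),
    maxTokens: Math.round(clamp(p.maxTokens, 128, 128000)),
    topP: clamp(p.topP, 0, 1),
    frequencyPenalty: clamp(p.frequencyPenalty, -2, 2),
    presencePenalty: clamp(p.presencePenalty, -2, 2),
  };
}
```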
Agent Settings
Configure default agents and their behavior:
- Default Agent: Select starting agent (Standard, RAG, Translation, etc.)
- Auto-select Agent: Automatically choose appropriate agent based on query
- Agent Handoff: Allow agents to transfer to specialized agents
- RAG Agent Settings: Configure retrieval parameters (k, threshold, reranking)
- Computer Use Settings: Configure Mate connection and permissions
- MCP Settings: Enable/disable MCP servers and tools
RAG Settings
Fine-tune Retrieval-Augmented Generation:
- Retrieval Count (k): Number of chunks to retrieve (1-20)
- Similarity Threshold: Minimum similarity score (0.0-1.0)
- Reranking: Enable cross-encoder reranking
- Hybrid Search: Combine vector + keyword search
- Knowledge Graph: Enable GraphRAG entity search
- RAPTOR: Enable hierarchical chunk retrieval
- Cache Results: Cache retrieval results
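The hybrid-search setting combines vector and keyword scores before the threshold and top-k cuts are applied. A hedged sketch of one common fusion approach (weighted sum); the 0.7/0.3 default weights, the normalized score ranges, and the `hybridRank` name are all assumptions for illustration:

```typescript
// Fuse vector and keyword scores, filter by threshold, keep top k
// (illustrative sketch of the hybrid-search step).
export interface ScoredChunk {
  id: string;
  vectorScore: number;  // e.g. cosine similarity, normalized to 0..1
  keywordScore: number; // e.g. BM25-style score, normalized to 0..1
}

export function hybridRank(
  chunks: ScoredChunk[],
  k: number,
  threshold: number,
  vectorWeight = 0.7, // assumed default, not documented
): string[] {
  return chunks
    .map((c) => ({
      id: c.id,
      score: vectorWeight * c.vectorScore + (1 - vectorWeight) * c.keywordScore,
    }))
    .filter((c) => c.score >= threshold) // Similarity Threshold setting
    .sort((a, b) => b.score - a.score)
    .slice(0, k)                         // Retrieval Count (k) setting
    .map((c) => c.id);
}
```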
Workflow Settings
- Auto-save: Automatically save workflow changes
- Grid Snap: Snap nodes to grid
- Show Minimap: Display workflow minimap
- Execution Timeout: Maximum workflow execution time
- Concurrent Nodes: Max parallel node execution
Privacy & Security
- Data Retention: How long to keep conversations
- Analytics: Enable/disable usage analytics
- Share Conversations: Allow conversation sharing
- API Key Visibility: Mask API keys in UI
- Export Data: Download all user data
- Delete Account: Permanently delete account and data
User Customization
ExtendedLM allows extensive personalization through custom instructions and profiles.
Custom Instructions
Define system-level instructions that apply to all conversations:
// Custom instructions interface
// File: app/types/user.ts
interface CustomInstructions {
  // User information (helps AI understand context)
  aboutUser: string;

  // Response preferences
  responseStyle: string;

  // Examples:
  // aboutUser: "I'm a Python developer working on data science projects.
  //             I prefer detailed technical explanations."
  //
  // responseStyle: "Always provide code examples with explanations.
  //                 Use Python for examples. Include type hints."
}
User Profiles
Create multiple profiles for different use cases:
- Work Profile: Technical, formal responses with code examples
- Personal Profile: Casual, creative responses
- Learning Profile: Detailed explanations with step-by-step guidance
- Quick Profile: Concise, to-the-point responses
Saved Prompts
Save frequently used prompts as templates:
- Categories: Organize prompts by category (Code Review, Writing, Research, etc.)
- Variables: Use placeholders like {topic}, {language}, {style}
- Quick Access: Slash commands for quick prompt insertion (/review, /explain, etc.)
- Share Prompts: Export and share prompt templates
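The {variable} placeholders above can be filled with a simple substitution pass. This sketch leaves unknown placeholders intact so the user can spot them; that behavior and the `fillTemplate` name are assumptions, not documented:

```typescript
// Substitute {name} placeholders in a saved prompt (illustrative sketch).
export function fillTemplate(
  template: string,
  vars: Record<string, string>,
): string {
  return template.replace(/\{(\w+)\}/g, (match, name: string) =>
    // Unknown placeholders are left as-is rather than dropped.
    name in vars ? vars[name] : match,
  );
}
```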
Keyboard Shortcuts
Extensive keyboard shortcuts for power users:
General
- Ctrl/Cmd + K: Quick command palette
- Ctrl/Cmd + N: New conversation
- Ctrl/Cmd + ,: Open settings
- Ctrl/Cmd + /: Toggle sidebar
- Ctrl/Cmd + \: Toggle right panel
Chat
- Enter: Send message
- Shift + Enter: New line
- Ctrl/Cmd + ↑: Edit last message
- Ctrl/Cmd + R: Regenerate response
- Esc: Stop streaming
Artifacts
- Ctrl/Cmd + S: Save artifact
- Ctrl/Cmd + Enter: Run code
- Ctrl/Cmd + Shift + F: Format code
Custom CSS
Advanced users can inject custom CSS for complete UI customization:
/* Example custom CSS */
/* File: Settings → Advanced → Custom CSS */
/* Change message bubble colors */
.message-user {
  background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
}

/* Customize code blocks */
.code-block {
  border-radius: 12px;
  box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
}

/* Adjust sidebar width */
.docs-sidebar {
  width: 280px;
}

/* Custom scrollbar */
::-webkit-scrollbar {
  width: 8px;
}
::-webkit-scrollbar-thumb {
  background: var(--brand-primary);
  border-radius: 4px;
}
Real-time Features
ExtendedLM uses Server-Sent Events (SSE) for real-time updates without polling.
Streaming Responses
Token-by-token streaming for immediate feedback:
// SSE client implementation
// File: app/lib/sse-client.ts
export function streamCompletion(
  conversationId: string,
  message: string,
  onToken: (token: string) => void,
  onComplete: (response: string) => void,
  onError: (error: Error) => void
) {
  // EventSource only supports GET, so the message travels in the query string.
  const eventSource = new EventSource(
    `/api/chat/stream?conversation_id=${conversationId}` +
      `&message=${encodeURIComponent(message)}`
  );

  let fullResponse = '';

  eventSource.addEventListener('token', (event) => {
    const token = (event as MessageEvent<string>).data;
    fullResponse += token;
    onToken(token);
  });

  eventSource.addEventListener('done', () => {
    onComplete(fullResponse);
    eventSource.close();
  });

  // Network-level failures fire the generic error event, which carries no data.
  eventSource.onerror = () => {
    onError(new Error('Stream connection failed'));
    eventSource.close();
  };
}
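When streaming over `fetch()` instead of EventSource (for example in a fallback path), the raw SSE wire format has to be parsed by hand. A sketch of that parsing, handling only the `event:` and `data:` fields; a complete parser also tracks `id:` and `retry:`, and `parseSSE` is an illustrative name:

```typescript
// Parse raw Server-Sent Events text into (event, data) frames
// (illustrative sketch; frames are separated by a blank line).
export interface SSEFrame {
  event: string;
  data: string;
}

export function parseSSE(raw: string): SSEFrame[] {
  const frames: SSEFrame[] = [];
  for (const block of raw.split(/\n\n/)) {
    let event = 'message'; // the spec's default event type
    const data: string[] = [];
    for (const line of block.split('\n')) {
      if (line.startsWith('event:')) event = line.slice(6).trim();
      else if (line.startsWith('data:')) data.push(line.slice(5).trimStart());
    }
    // Blocks with no data field (comments, keep-alives) produce no frame.
    if (data.length > 0) frames.push({ event, data: data.join('\n') });
  }
  return frames;
}
```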
Live Workflow Execution
Real-time workflow execution updates:
- Node Status: Visual indication of executing nodes (pending, running, success, error)
- Progress Updates: Step-by-step progress messages
- Output Streaming: Stream node outputs as they complete
- Error Handling: Real-time error messages with retry options
Collaborative Features
Multi-user collaboration (Enterprise feature):
- Shared Conversations: Multiple users in same conversation
- Typing Indicators: See when others are typing
- Live Cursors: See collaborator positions in workflows
- Real-time Sync: Instant updates across all connected clients
Notifications
Real-time browser notifications:
- Message Notifications: Desktop notifications for new messages
- Workflow Complete: Notification when workflow finishes
- Error Alerts: Critical error notifications
- Custom Alerts: Configure custom notification triggers
Mobile & Responsive Design
ExtendedLM is fully responsive and optimized for mobile devices.
Mobile UI Adaptations
- Collapsible Sidebar: Swipe or tap to open/close
- Bottom Input: Fixed input bar at bottom for easy thumb typing
- Touch Gestures: Swipe to delete, long-press for context menu
- Optimized Artifacts: Full-screen artifact view on mobile
- Voice Input Priority: Large voice button for easier dictation
Progressive Web App (PWA)
Install ExtendedLM as a native-like app:
- Offline Support: Basic functionality without internet
- Add to Home Screen: Install on iOS/Android
- Push Notifications: Native push notifications
- Background Sync: Sync data when connection restored
Responsive Breakpoints
/* Responsive breakpoints */
/* File: app/globals.css */
/* Mobile */
@media (max-width: 640px) {
  .docs-sidebar { display: none; }
  .right-panel { width: 100%; }
  .message-input { bottom: 0; }
}

/* Tablet */
@media (min-width: 641px) and (max-width: 1024px) {
  .docs-sidebar { width: 240px; }
  .right-panel { width: 50%; }
}

/* Desktop */
@media (min-width: 1025px) {
  .docs-sidebar { width: 280px; }
  .right-panel { width: 400px; }
}
Accessibility
ExtendedLM is built with accessibility in mind, following WCAG 2.1 AA standards.
Keyboard Navigation
- Tab Navigation: Full keyboard navigation support
- Focus Indicators: Clear visual focus states
- Skip Links: Skip to main content/navigation
- Escape to Close: ESC key closes modals/panels
Screen Reader Support
- ARIA Labels: Comprehensive ARIA attributes
- Semantic HTML: Proper heading hierarchy and landmarks
- Live Regions: Announce streaming messages to screen readers
- Alt Text: Descriptive alt text for all images
Visual Accessibility
- High Contrast Mode: WCAG AA contrast ratios
- Scalable Text: Supports browser zoom up to 200%
- Color Blind Modes: Alternative color schemes
- Reduced Motion: Respect prefers-reduced-motion
// Accessibility example
// File: app/components/Chat/Message.tsx
<div
  role="article"
  aria-label={`Message from ${message.role}`}
  aria-live={isStreaming ? 'polite' : 'off'}
>
  <div className="message-header">
    <span className="sr-only">
      {message.role === 'user' ? 'You said:' : 'Assistant replied:'}
    </span>
    {message.role}
  </div>
  <div className="message-content">
    {message.content}
  </div>
  <div role="toolbar" aria-label="Message actions">
    <button aria-label="Copy message">...</button>
    <button aria-label="Edit message">...</button>
  </div>
</div>
Performance Optimization
ExtendedLM is optimized for fast loading and smooth interactions.
Lazy Loading
- Route-based Code Splitting: Load only needed JavaScript per page
- Component Lazy Loading: Dynamic imports for heavy components
- Image Lazy Loading: Load images as they enter viewport
- Virtual Scrolling: Render only visible messages (large conversations)
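Virtual scrolling boils down to computing which rows intersect the viewport. A sketch assuming a fixed row height (real message lists measure variable heights); `visibleRange` and the overscan default are illustrative:

```typescript
// Compute the window of rows to render for virtual scrolling
// (illustrative sketch; fixed row height assumed).
export function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  rowCount: number,
  overscan = 3, // extra rows above/below to avoid blank flashes
): { start: number; end: number } {
  const first = Math.floor(scrollTop / rowHeight);
  const last = Math.ceil((scrollTop + viewportHeight) / rowHeight);
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(rowCount, last + overscan), // exclusive end index
  };
}
```

Only `messages.slice(start, end)` is rendered; everything else is replaced by spacer elements of the appropriate height.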
Caching Strategy
// Caching configuration
// File: app/lib/cache.ts
// Static assets: Cache-Control: public, max-age=31536000, immutable
// API responses: SWR with 5-minute cache
// User data: IndexedDB for offline access
export const cacheConfig = {
  // React Query cache time
  staleTime: 5 * 60 * 1000,   // 5 minutes
  cacheTime: 10 * 60 * 1000,  // 10 minutes

  // IndexedDB for offline
  offlineStorage: {
    conversations: true,
    messages: true,
    artifacts: false, // Too large
  },
};
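The two timeouts in `cacheConfig` imply a three-state lifecycle for a cached entry. A sketch of that rule, assuming stale-while-revalidate semantics (`cacheState` is an illustrative helper, not part of the app):

```typescript
// Classify a cache entry by age against staleTime and cacheTime
// (illustrative sketch of stale-while-revalidate semantics).
export function cacheState(
  ageMs: number,
  staleTime: number,
  cacheTime: number,
): 'fresh' | 'stale' | 'evicted' {
  if (ageMs < staleTime) return 'fresh';  // serve directly from cache
  if (ageMs < cacheTime) return 'stale';  // serve, then revalidate in background
  return 'evicted';                       // entry dropped; refetch required
}
```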
Bundle Optimization
- Tree Shaking: Remove unused code
- Minification: Compress JavaScript and CSS
- Compression: Gzip/Brotli compression
- CDN Delivery: Static assets served from CDN
Runtime Optimization
- React Memoization: useMemo/useCallback for expensive operations
- Debouncing: Debounce search and auto-save
- Request Coalescing: Batch multiple API requests
- WebWorkers: Offload heavy processing (embedding, syntax highlighting)
Browser Support
ExtendedLM supports all modern browsers with the following minimum versions:
- Chrome/Edge: Version 90+ (Chromium-based)
- Firefox: Version 88+
- Safari: Version 14+ (macOS/iOS)
- Opera: Version 76+
Required browser features:
- ES2020 JavaScript support
- CSS Grid and Flexbox
- Server-Sent Events (SSE)
- WebSocket (for Realtime API)
- IndexedDB (for offline support)
- Web Audio API (for speech features)
Feature Detection
ExtendedLM includes graceful fallbacks for unsupported features:
// Feature detection example
// File: app/lib/feature-detection.ts
export const features = {
  sse: typeof EventSource !== 'undefined',
  websocket: typeof WebSocket !== 'undefined',
  indexedDB: typeof indexedDB !== 'undefined',
  webAudio: typeof AudioContext !== 'undefined',
  mediaRecorder: typeof MediaRecorder !== 'undefined',
};

// Fallback for SSE (use polling)
if (!features.sse) {
  console.warn('SSE not supported, falling back to polling');
  usePollingInsteadOfSSE();
}