Adding an AI Chat Assistant to My Portfolio Site

March 5, 2026

ai-tools · react · typescript · nextjs · vercel-ai-sdk

TL;DR: I added an AI chat assistant to my portfolio that can answer questions about my work, match job descriptions against my experience, and even help recruiters schedule calls. It started as a basic ⌘K command palette, went through multiple redesigns, and landed on a resizable sliding panel powered by Claude Haiku. Here's what I learned along the way.


Why an AI Assistant on a Portfolio?

I kept noticing the same pattern: someone visits my site, skims the homepage, maybe clicks into a project, then leaves. Recruiters in particular have limited time — they're scanning dozens of portfolios and want fast answers to specific questions. "Does this person know Kubernetes?" "What did they actually build at their last job?"

A chatbot that knows all my content felt like the right solution. Not a generic FAQ, but something that could surface relevant projects, match against job descriptions, and actually be helpful in under 30 seconds.

The Evolution: Three Attempts at the Right UX

Attempt 1: The Command Palette

My first instinct was a ⌘K command palette — a familiar pattern from VS Code, Linear, and basically every developer tool. I built it with cmdk (the library behind Vercel's command menu) and gave it two modes: navigation search and AI chat.

The navigation mode was dead weight. My site has four pages. Nobody needs fuzzy search for that. I ripped it out and went AI-only.

Attempt 2: The Modal

The AI chat as a centered modal looked fine but felt wrong in practice. When you opened it, the whole site disappeared behind a backdrop. You couldn't reference the content you were looking at while chatting about it. And on mobile, the modal took over the screen with no way to peek at the portfolio underneath.

Attempt 3: The Sliding Panel

This is where it clicked. A right-side panel that slides in, pushes the main content over, and lets you chat while browsing. Think of how Figma's inspector panel works — it's beside your content, not blocking it.

The key insight was using custom DOM events to coordinate the layout:

// command-palette.tsx — notify main content when panel opens
window.dispatchEvent(
  new CustomEvent('ai-panel-toggle', { detail: { open, width } })
)
// main-content.tsx — shift content in response
const handlePanelToggle = (e: CustomEvent<{ open: boolean; width: number }>) => {
  container.style.marginRight = e.detail.open ? `${e.detail.width}px` : '0px'
  container.style.transition = 'margin-right 300ms ease-out'
}

No React context, no state management library — just two components communicating through the DOM. Simple and decoupled.

Switching from Gemini to Claude

I started with Google Gemini 2.0 Flash via the Vercel AI SDK. It worked, but the responses felt generic — like a chatbot that happened to know some facts about me, not one that represented me well.

Switching to Claude Haiku 4.5 through Concentrate.ai (an OpenAI-compatible proxy) took only a few lines thanks to the AI SDK's provider abstraction:

// Before
import { google } from '@ai-sdk/google'
model: google('gemini-2.0-flash')

// After
import { createOpenAICompatible } from '@ai-sdk/openai-compatible'
const concentrate = createOpenAICompatible({
  name: 'concentrate',
  baseURL: 'https://api.concentrate.ai/v1',
})
model: concentrate('claude-haiku-4-5')

Claude's responses immediately felt more natural, and it was noticeably better at synthesizing information across different parts of my portfolio.

The Hard Parts

Making the Panel Resizable

I wanted the panel to be resizable by dragging its left edge — standard stuff, but the details matter. The trick is using setPointerCapture so the drag keeps tracking even when your cursor moves away from the thin resize handle:

// Capture pointer for smooth tracking even outside element
(e.target as HTMLElement).setPointerCapture(e.pointerId)

During the drag, I update the DOM directly via refs instead of triggering React re-renders. The panel width updates at 60fps with zero jank:

// Direct DOM update — no React re-render
panelWidthRef.current = newWidth
panel.style.width = `${newWidth}px`

I also disable CSS transitions during drag (so the panel doesn't "lag" behind the cursor) and re-enable them on pointer up.
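Putting those pieces together, here's a sketch of the drag logic with the width math pulled out as a pure function (the `MIN_WIDTH`/`MAX_WIDTH` clamp values and `attachResize` wiring are illustrative assumptions, not the actual component):

```typescript
const MIN_WIDTH = 320 // illustrative clamp bounds
const MAX_WIDTH = 720

// Pure width math: the panel sits on the right edge, so dragging left
// (a smaller clientX) makes it wider. Clamped to sane bounds.
function nextPanelWidth(startWidth: number, startX: number, currentX: number): number {
  const proposed = startWidth + (startX - currentX)
  return Math.min(MAX_WIDTH, Math.max(MIN_WIDTH, proposed))
}

// Wiring sketch: attach to the thin resize handle on the panel's left edge
function attachResize(handle: HTMLElement, panel: HTMLElement): void {
  let startX = 0
  let startWidth = 0

  handle.addEventListener('pointerdown', (e) => {
    startX = e.clientX
    startWidth = panel.getBoundingClientRect().width
    handle.setPointerCapture(e.pointerId) // keep receiving moves outside the handle
    panel.style.transition = 'none'       // no transition lag behind the cursor mid-drag
  })

  handle.addEventListener('pointermove', (e) => {
    if (!handle.hasPointerCapture(e.pointerId)) return
    // Direct style write — no React re-render in the hot path
    panel.style.width = `${nextPanelWidth(startWidth, startX, e.clientX)}px`
  })

  handle.addEventListener('pointerup', (e) => {
    handle.releasePointerCapture(e.pointerId)
    panel.style.transition = '' // restore the CSS transition after the drag
  })
}
```

Keeping `nextPanelWidth` pure also makes the only non-obvious part of the feature — the clamping and direction math — trivially unit-testable.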

Job Description Detection

One of the more useful features: if someone pastes a job description into the chat, the assistant automatically switches into "advocate mode" — it maps my experience against the requirements and makes a case for why I'd be a good fit.

The detection is intentionally simple. I check for common job posting patterns (phrases like "years of experience", "requirements", "qualifications") and trigger when at least two match:

function isJobDescription(text: string): boolean {
  const jobIndicators = [
    /responsibilities|requirements|qualifications/i,
    /years? of experience/i,
    /we are looking for/i,
    /job description|position|role:/i,
  ]
  const matchCount = jobIndicators.filter((p) => p.test(text)).length
  return matchCount >= 2 || (text.length > 300 && matchCount >= 1)
}

When detected, the system prompt gets extended with instructions to structure the response as a skills match analysis. It's basically prompt engineering triggered by regex — crude but effective.
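The prompt extension step might look something like this — a sketch where the addendum wording is illustrative and the detector (the `isJobDescription` function above) is passed in as a parameter so the snippet stands alone:

```typescript
// Illustrative addendum — the real system prompt instructions differ
const ADVOCATE_ADDENDUM =
  'The user pasted a job description. Structure your reply as a skills-match ' +
  'analysis: map each stated requirement to concrete experience, and note honest gaps.'

// Append "advocate mode" instructions only when the detector fires
function buildSystemPrompt(
  basePrompt: string,
  latestUserMessage: string,
  isJobDescription: (text: string) => boolean
): string {
  if (isJobDescription(latestUserMessage)) {
    return `${basePrompt}\n\n${ADVOCATE_ADDENDUM}`
  }
  return basePrompt
}
```

Because the switch happens entirely in the system prompt, the chat route and streaming code don't need to know the mode exists.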

Contextual Follow-Up Suggestions

After each response, the chat shows 2-3 follow-up suggestions based on what was just discussed. If the conversation touched on projects, it suggests project-related questions. If contact info came up, it surfaces scheduling links.

This works by scanning the last assistant and user messages for topic keywords and mapping them to predefined suggestion categories. It's not ML — it's a lookup table with regex. And it works surprisingly well for guiding conversations.
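The mapping can be sketched as a small rule table — patterns on one side, canned suggestions on the other. The categories and phrasing below are illustrative, not the site's actual rules:

```typescript
type SuggestionRule = { pattern: RegExp; suggestions: string[] }

// Illustrative rules: each regex maps recent conversation text to follow-ups
const SUGGESTION_RULES: SuggestionRule[] = [
  {
    pattern: /project|built|shipped/i,
    suggestions: ['What was the hardest technical problem?', 'What stack did it use?'],
  },
  {
    pattern: /contact|email|schedule|call/i,
    suggestions: ['How do I book a call?', 'What is the best way to reach you?'],
  },
  {
    pattern: /experience|worked at|role/i,
    suggestions: ['What did you do in your most recent role?'],
  },
]

// Scan the last few messages and collect up to `limit` matching suggestions
function suggestFollowUps(lastMessages: string[], limit = 3): string[] {
  const text = lastMessages.join('\n')
  const out: string[] = []
  for (const rule of SUGGESTION_RULES) {
    if (rule.pattern.test(text)) out.push(...rule.suggestions)
  }
  return out.slice(0, limit)
}
```

Each rule fires at most once per turn, and the `limit` keeps the UI to the 2–3 suggestions mentioned above.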

What I'd Do Differently

  • Stream rendering from the start. I added Streamdown for markdown rendering midway through. It should have been there from day one — watching raw markdown tokens appear and then snap into formatted text is jarring.
  • Server-side content loading. Right now the chat route imports all JSON content at the module level. If my content grows significantly, I'd move to a proper retrieval step instead of dumping everything into the system prompt.
  • Better mobile UX. The panel goes full-screen on mobile (via max-sm:!w-full), which works but isn't ideal. A bottom sheet pattern would feel more native.

Key Takeaways

  1. Side panels beat modals for chat. Users need to reference the content they're chatting about. Don't hide it behind a backdrop.
  2. Custom DOM events are underrated for decoupled component communication. Not everything needs a state management library.
  3. The Vercel AI SDK's provider abstraction is genuinely useful. Switching LLM providers was a 3-line change because the streaming, message format, and UI hooks are all provider-agnostic.
  4. Simple heuristics can drive smart UX. Job description detection via regex and contextual suggestions via keyword matching — neither is sophisticated, but both make the chat feel much more intelligent.
  5. Direct DOM manipulation via refs still has a place in React. For high-frequency updates like panel resizing, bypassing the render cycle is the right call.

The AI assistant is live at sachinadlakha.us — try pasting a job description into it and see what happens.