╔══════════════════════════════════════════╗
║  ACCESS CONTROL — local-llm-playground   ║
╚══════════════════════════════════════════╝
> on-prem AI — no internet, no admin rights, no data leaves your machine.
Seven self-contained examples that run entirely in your browser (via WebGPU) or against a local Ollama server. Start with a simple chat and work your way up to OCR-powered structured extraction, multi-model comparison, and prompt engineering.
What you need:
- Browser: Chrome 113+ or Edge 113+ — pre-installed on most machines, no install needed.
- Browser examples: the .task model files — see 01_chat for Kaggle links. Both files are also pre-loaded on the hackathon machines.
- Ollama examples: run `ollama pull gemma3:1b` in a terminal to fetch a model, then open 06_ollama.html or 07_prompt.html — it connects to localhost:11434 automatically (see the sketch below).
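For the Ollama-backed pages, "connects to localhost:11434 automatically" just means an HTTP request to Ollama's local REST API. A minimal sketch of such a request, assuming the standard /api/generate endpoint and the gemma3:1b model pulled above:

```ts
// Minimal sketch: one non-streaming request to a local Ollama server.
// Assumes Ollama is running and gemma3:1b has been pulled.
async function askOllama(prompt: string, model = "gemma3:1b"): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = await res.json();
  return data.response; // the generated text
}

// Usage: askOllama("Why is the sky blue?").then(console.log);
```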
basic_chat
Type a message and get a response from Gemma 3 270M running entirely in your browser.
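The in-browser examples load a .task model bundle and run it on WebGPU; MediaPipe's LLM Inference API is one library that works exactly this way, so the following is a hedged sketch of what a page like this boils down to. The model file name is a placeholder, not necessarily the playground's actual file.

```ts
import { FilesetResolver, LlmInference } from "@mediapipe/tasks-genai";

// Sketch of an in-browser Gemma chat, assuming the MediaPipe LLM Inference API.
// "gemma3-270m.task" is a placeholder for whichever .task file you downloaded.
// Runs as a module script, so top-level await is fine.
const genai = await FilesetResolver.forGenAiTasks(
  "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-genai/wasm"
);
const llm = await LlmInference.createFromOptions(genai, {
  baseOptions: { modelAssetPath: "gemma3-270m.task" },
  maxTokens: 512,
  temperature: 0.8,
});

// One turn: send the user's message, await the full response string.
const reply = await llm.generateResponse("Hello! What can you do offline?");
console.log(reply);
```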
text_classifier
Define your own categories, paste any text, and let the model assign it to the right bucket.
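Classification with a small model is mostly prompt construction: list the allowed categories, ask for exactly one, and validate the reply. A sketch of that pattern, written against any prompt-in / text-out generate function such as llm.generateResponse or askOllama from the sketches above; the category names in the usage line are made up:

```ts
// Sketch: constrain a free-form model to a fixed label set.
// `generate` is any prompt-in / text-out function (browser LLM or Ollama).
async function classify(
  text: string,
  categories: string[],
  generate: (prompt: string) => Promise<string>
): Promise<string> {
  const prompt =
    `Classify the text into exactly one of these categories: ${categories.join(", ")}.\n` +
    `Reply with the category name only.\n\nText:\n${text}`;
  const raw = (await generate(prompt)).trim();
  // Small models sometimes add punctuation or explanation; keep only a known label.
  const match = categories.find((c) => raw.toLowerCase().includes(c.toLowerCase()));
  return match ?? "unknown";
}

// Usage: classify("Refund hasn't arrived after 10 days", ["billing", "shipping", "other"], askOllama);
```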
data_extractor
Paste unstructured text (receipts, emails, contacts) and get clean structured JSON out.
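Extraction works the same way, except the prompt asks for JSON with a fixed set of keys and the page parses the reply. A sketch; the field names are illustrative assumptions, not the playground's real schema:

```ts
// Sketch: turn unstructured text (receipt, email, contact) into JSON.
// The field list is an example; the real pages define their own schemas.
async function extract(
  text: string,
  fields: string[],
  generate: (prompt: string) => Promise<string>
): Promise<Record<string, string> | null> {
  const prompt =
    `Extract the following fields from the text and answer with JSON only, ` +
    `using exactly these keys: ${fields.join(", ")}.\n\nText:\n${text}`;
  let raw = await generate(prompt);
  // Models often wrap JSON in markdown fences; strip them before parsing.
  raw = raw.replace(/`{3}(?:json)?/g, "").trim();
  try {
    return JSON.parse(raw);
  } catch {
    return null; // let the UI show the raw reply instead of crashing
  }
}

// Usage: extract(receiptText, ["vendor", "date", "total"], askOllama);
```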
model_compare
Run the same prompt on Gemma 3 270M and 1B side by side and compare quality and speed.
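Comparing the two sizes is mostly a matter of timing the same call twice and showing the answers next to each other. A sketch of the timing half, where the two generate functions stand in for the two loaded models (names are placeholders):

```ts
// Sketch: run one prompt against two models and record latency for each.
interface TimedResult {
  label: string;
  text: string;
  ms: number;
}

async function runTimed(
  label: string,
  generate: (prompt: string) => Promise<string>,
  prompt: string
): Promise<TimedResult> {
  const start = performance.now(); // browser high-resolution clock
  const text = await generate(prompt);
  return { label, text, ms: performance.now() - start };
}

// Usage sketch: gemma270m and gemma1b would be two loaded model instances.
// const results = [
//   await runTimed("gemma3 270M", (p) => gemma270m.generateResponse(p), prompt),
//   await runTimed("gemma3 1B",   (p) => gemma1b.generateResponse(p), prompt),
// ];
```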
ocr_extract
Drop in a screenshot of a form, invoice or ID card — OCR + LLM returns typed fields.
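The OCR step happens before the LLM ever sees anything: an in-browser OCR engine turns the screenshot into plain text, and that text goes through an extraction prompt like the one in data_extractor. The page's actual OCR engine isn't named here, so this sketch assumes Tesseract.js purely for illustration:

```ts
import Tesseract from "tesseract.js";

// Sketch: OCR a dropped image, then ask the LLM to type out the fields.
// Tesseract.js is an assumption; any in-browser OCR engine would do.
async function ocrExtract(
  image: File,
  fields: string[],
  generate: (prompt: string) => Promise<string>
): Promise<string> {
  const { data } = await Tesseract.recognize(image, "eng");
  const prompt =
    `Below is raw OCR output from a form. Extract these fields as JSON: ` +
    `${fields.join(", ")}.\n\nOCR text:\n${data.text}`;
  return generate(prompt);
}

// Usage: ocrExtract(droppedFile, ["name", "invoice_number", "total"], askOllama);
```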
multi_model_compare
Send the same prompt to several local Ollama models in parallel and diff outputs side by side.
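"In parallel" here is just Promise.all over the same /api/generate request with different model names. A sketch; the model list is an assumption (in practice Ollama's /api/tags endpoint lists everything you have pulled):

```ts
// Sketch: fan one prompt out to several local Ollama models at once.
// The model names are examples; replace them with whatever you've pulled.
async function compareModels(prompt: string, models = ["gemma3:1b", "llama3.2:1b"]) {
  const results = await Promise.all(
    models.map(async (model) => {
      const res = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model, prompt, stream: false }),
      });
      const data = await res.json();
      return { model, text: data.response as string };
    })
  );
  return results; // render each entry in its own column to diff by eye
}
```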
prompt_lab
Same message, three system prompts side by side. See how framing changes everything — the fastest skill to learn in AI.
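The only moving part in prompt_lab is the system message: the user text stays fixed while three different framings go through Ollama's chat endpoint. A sketch; the three personas are illustrative, not the page's actual presets:

```ts
// Sketch: same user message, three system prompts, three answers to compare.
// The personas below are made-up examples of how framing changes the reply.
const systemPrompts = [
  "You are a terse assistant. Answer in one sentence.",
  "You are a patient teacher. Explain step by step for a beginner.",
  "You are a skeptical reviewer. Point out risks and missing information.",
];

async function promptLab(userMessage: string, model = "gemma3:1b") {
  return Promise.all(
    systemPrompts.map(async (system) => {
      const res = await fetch("http://localhost:11434/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model,
          stream: false,
          messages: [
            { role: "system", content: system },
            { role: "user", content: userMessage },
          ],
        }),
      });
      const data = await res.json();
      return { system, reply: data.message.content as string };
    })
  );
}
```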