Orcho’s Chrome Extension provides real-time risk assessment for AI prompts across Google Gemini, Claude (web), and ChatGPT. Get instant risk scores and warnings before submitting potentially dangerous prompts, protecting your organization from data leaks and risky operations.

Features

Multi-Platform Support

Works with Gemini, Claude, and ChatGPT

Real-Time Analysis

Risk assessment happens as you type

Smart Warnings

Visual alerts for high-risk prompts before submission

Click-to-Block

One-click blocking of critical-risk prompts

Supported Platforms

Supported:
  • ✅ gemini.google.com
  • ✅ Gemini Advanced
  • ✅ Gemini Pro
  • ✅ All conversation modes
Features:
  • Real-time risk scoring as you type
  • Warning banner for high-risk prompts
  • Prompt blocking for critical risks
  • Risk history tracking

Installation

1. Install from Chrome Web Store

Visit the Chrome Web Store and search for “Orcho Risk Assessment”, or click: Install Orcho Extension.
Works on any Chromium-based browser: Chrome, Edge, Brave, Arc.

2. Pin the extension

Click the puzzle icon in your browser toolbar and pin Orcho for easy access.

3. Contact for API key

Email [email protected] to request an API key for your organization.

4. Configure the extension

Click the Orcho icon and enter your API key:
  1. Click the Orcho extension icon
  2. Click “Settings”
  3. Paste your API key
  4. Click “Save”
  5. Refresh any open AI platform tabs

5. Test the extension

Go to claude.ai, gemini.google.com, or chat.openai.com and type a high-risk prompt:

Delete all records from the production database

You should see a risk indicator appear.

How It Works

1. You start typing

Begin writing a prompt in Gemini, Claude, or ChatGPT.
2. Real-time analysis

Orcho analyzes your prompt as you type, with a 500 ms debounce to avoid excessive API calls.
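The debounce described above can be sketched as follows. This is a minimal illustration of the technique, assuming standard browser timers; the helper and the `assessPrompt` callback are hypothetical names, not Orcho’s actual API.

```javascript
// Debounce: delay a call until input has been quiet for `delayMs`.
// Each new call cancels the previously scheduled one.
function debounce(fn, delayMs) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delayMs);
  };
}

// Hypothetical wiring: only the last keystroke in a 500 ms window
// triggers a risk-assessment call.
const assessPrompt = (text) => {
  // POST `text` to the risk-assessment API here.
};
const debouncedAssess = debounce(assessPrompt, 500);
// inputEl.addEventListener("input", (e) => debouncedAssess(e.target.value));
```

The effect is that rapid typing produces a single API request once the user pauses, rather than one request per keystroke.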
3. Risk score displayed

A risk indicator appears showing:
  • Risk score (0.0-1.0)
  • Risk level (minimal, low, medium, high, critical)
  • Color-coded badge (green → red)

4. Warnings for high risk

If risk ≥ 0.6 (high), a warning banner appears:
⚠️  HIGH RISK DETECTED (Score: 0.78)

This prompt may:
• Expose sensitive data
• Cause irreversible changes
• Violate security policies

Suggestions:
• Remove specific identifiers
• Add safety constraints
• Review with security team

[ Modify Prompt ]  [ Proceed Anyway ]
5. Blocking for critical risk

If risk ≥ 0.8 (critical), prompt submission is blocked:
🛑 CRITICAL RISK - SUBMISSION BLOCKED

This prompt has been blocked due to critical risk factors:
• Data Sensitivity: 0.95 (CRITICAL)
• Potential for irreversible damage

You cannot submit this prompt. Please:
• Rephrase to remove sensitive data
• Contact security team for approval
• Use a different approach

[ Edit Prompt ]  [ Contact Security ]

Visual Indicators

Risk Badges

The extension shows risk level with color-coded badges:
┌─────────────────────────────────────────┐
│ Your prompt here...                     │
│                                         │
│                    [✓ MINIMAL] 0.12    │ ← Green badge
└─────────────────────────────────────────┘

┌─────────────────────────────────────────┐
│ Drop the users table                    │
│                                         │
│                    [⚠️ CRITICAL] 0.92   │ ← Red badge
└─────────────────────────────────────────┘

Risk Level Colors

Risk Level   Score Range   Badge Color   Behavior
Minimal      < 0.2         🟢 Green      Allow
Low          0.2 - 0.39    🟡 Yellow     Allow
Medium       0.4 - 0.59    🟠 Orange     Warn
High         0.6 - 0.79    🔴 Red        Strong warning
Critical     ≥ 0.8         ⛔ Dark Red   Block
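The table above maps directly to a threshold check. The sketch below uses the thresholds from this page; the function itself is illustrative, not part of Orcho’s API.

```javascript
// Map a risk score (0.0-1.0) to the level and behavior from the table above.
// Boundaries are checked from highest to lowest.
function riskLevel(score) {
  if (score >= 0.8) return { level: "critical", behavior: "block" };
  if (score >= 0.6) return { level: "high", behavior: "strong warning" };
  if (score >= 0.4) return { level: "medium", behavior: "warn" };
  if (score >= 0.2) return { level: "low", behavior: "allow" };
  return { level: "minimal", behavior: "allow" };
}
```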

Configuration Options

Risk Thresholds

Customize when warnings and blocks occur:
warning_threshold (number, default: 0.6)
  Risk score that triggers warning banners.
blocking_threshold (number, default: 0.8)
  Risk score that blocks prompt submission.
show_suggestions (boolean, default: true)
  Show suggestions for safer alternatives.

Notification Settings

desktop_notifications (boolean, default: false)
  Show desktop notifications for high-risk prompts.
sound_alerts (boolean, default: false)
  Play a sound for critical-risk prompts.
log_prompts (boolean, default: true)
  Log all assessed prompts for audit purposes.

Platform Settings

enabled_platforms (array, default: ["gemini", "claude", "chatgpt"])
  Which AI platforms to monitor. Options: "gemini", "claude", "chatgpt".
auto_block (boolean, default: true)
  Automatically block critical-risk prompts.
require_justification (boolean, default: false)
  Require written justification for high-risk prompts.
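Putting the options above together, a full settings object might look like this. The values are the defaults listed on this page; the flat JSON shape is an illustrative assumption, not a documented schema.

```json
{
  "warning_threshold": 0.6,
  "blocking_threshold": 0.8,
  "show_suggestions": true,
  "desktop_notifications": false,
  "sound_alerts": false,
  "log_prompts": true,
  "enabled_platforms": ["gemini", "claude", "chatgpt"],
  "auto_block": true,
  "require_justification": false
}
```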

Example Scenarios

Scenario 1: Database Deletion

1. User types prompt

Delete all user accounts that haven't logged in this year

2. Risk assessment

⚠️  HIGH RISK (Score: 0.74)

Risk Factors:
• Data Sensitivity: 0.88
• Input Clarity: 0.65
• Blast Radius: 0.68
3. Warning displayed

This prompt may cause irreversible data loss.

Suggestions:
• Use SELECT first to preview affected records
• Export data before deletion
• Use soft delete (mark as inactive) instead
• Add WHERE clause with specific date
• Require manual confirmation

[ Modify Prompt ]  [ Proceed with Caution ]

Scenario 2: Sensitive Data Exposure

1. User types prompt

Analyze this customer data:
Name: John Doe
SSN: 123-45-6789
Credit Card: 4532-1234-5678-9012

2. Risk assessment

🛑 CRITICAL RISK (Score: 0.98)

Risk Factors:
• Data Sensitivity: 0.99 (PII detected)
• Input Clarity: 0.95
3. Submission blocked

SUBMISSION BLOCKED

This prompt contains:
• Social Security Number
• Credit card information
• Personally Identifiable Information (PII)

This violates data security policies.

How to proceed:
• Remove sensitive data
• Use anonymized test data
• Mask sensitive fields (XXX-XX-6789)
• Contact compliance team

[ Edit Prompt ]  [ Request Compliance Override ]

Scenario 3: Safe Code Request

1. User types prompt

Write a function to calculate the Fibonacci sequence in Python

2. Risk assessment

✓ MINIMAL RISK (Score: 0.08)

This prompt is safe to proceed.

3. No warning

A green badge is displayed; the user can submit immediately, without warnings.

Audit & Compliance

Prompt Logging

All assessed prompts are logged (if enabled) with:
{
  "timestamp": "2026-01-08T14:30:00Z",
  "platform": "claude",
  "user_email": "[email protected]",
  "prompt": "Delete old records...",
  "risk_score": 0.74,
  "risk_level": "high",
  "action_taken": "warned",
  "user_proceeded": false
}
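Exported log entries with the shape above are easy to summarize for review. The sketch below uses sample data inline; loading the real export (CSV/JSON) is out of scope, and the field names mirror the example entry on this page.

```javascript
// Sample entries shaped like the audit-log record above.
const entries = [
  { risk_level: "high", action_taken: "warned", user_proceeded: false },
  { risk_level: "critical", action_taken: "blocked", user_proceeded: false },
  { risk_level: "minimal", action_taken: "allowed", user_proceeded: true },
];

// Count high/critical prompts and overrides (user proceeded despite risk).
const highRisk = entries.filter(
  (e) => e.risk_level === "high" || e.risk_level === "critical"
);
const overrides = entries.filter(
  (e) => e.user_proceeded && e.risk_level !== "minimal"
);
console.log(`${highRisk.length} high/critical prompts, ${overrides.length} overrides`);
```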

Export Reports

Export risk assessment reports:
  1. Navigate to Orcho dashboard
  2. Click “Reports”
  3. Select date range
  4. Click “Export CSV” or “Export JSON”
Report includes:
  • All assessed prompts
  • Risk scores and levels
  • User actions (modified, proceeded, blocked)
  • Timestamps and platforms

Compliance Dashboard

View organization-wide metrics:
  • Total prompts assessed
  • High-risk attempts
  • Blocked prompts
  • Most common risk factors
  • Users with most high-risk prompts
Access at: .orcho.ai/compliance

Privacy & Security

Your prompts are processed securely:
1. TLS encryption

All API calls use TLS 1.3 encryption.

2. No storage

Prompts are analyzed in real time and not stored on Orcho servers (unless logging is enabled by an admin).

3. Local processing option

Enterprise customers can run risk assessment locally (contact sales).

4. Zero-knowledge architecture

Orcho cannot read your API keys; they are stored encrypted in browser storage.

What Data is Sent

When you type a prompt, only the following is sent to Orcho:
{
  "prompt": "Your prompt text",
  "platform": "claude",
  "user_id": "hashed_user_id"
}
Not sent:
  • ❌ Your API keys for AI platforms
  • ❌ Previous conversation history
  • ❌ Personal information
  • ❌ Browser history

Troubleshooting

Check extension is active:
  1. Click puzzle icon → Check Orcho is enabled
  2. Refresh the AI platform page
  3. Check extension icon - should be colored (not gray)
Verify API key:
  1. Click Orcho icon → Settings
  2. Check API key is entered correctly
  3. Click “Test Connection” to verify
Platform compatibility:
  1. Ensure you’re on supported domain
  2. Check extension permissions in chrome://extensions
Invalid key:
Rate limits:
  • Contact support to check quota
  • Reduce typing speed (the extension debounces at 500 ms)
  • Upgrade plan for higher limits
False positives:
  • Click “Report Incorrect” button on warning
  • Provide feedback to help us improve
  • Adjust thresholds in settings
False negatives:
Optimize performance:
  1. Settings → Enable “Lazy loading”
  2. Increase debounce time (500ms → 1000ms)
  3. Disable on less-critical platforms
  4. Clear extension cache
Check resource usage:
  1. Chrome Task Manager (Shift+Esc)
  2. Find “Orcho Extension”
  3. Should use < 50MB RAM

Advanced Features

Custom Risk Rules

Define organization-specific risk rules:
// Block prompts containing internal code names
{
  "rule": "keyword_blacklist",
  "keywords": ["ProjectTitan", "AlphaSystem"],
  "action": "block",
  "message": "Internal code names detected"
}

// Warn when requesting production data
{
  "rule": "environment_detection", 
  "patterns": ["production", "prod db", "live server"],
  "action": "warn"
}
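A client-side evaluator for rule objects like the two above can be sketched as follows. The rule shapes (`keywords`, `patterns`, `action`, `message`) mirror this page; `evaluateRules` itself is illustrative, not Orcho’s actual implementation.

```javascript
// Check a prompt against a list of rules; the first matching rule wins.
// keyword_blacklist rules carry `keywords`, environment_detection rules
// carry `patterns` — both are treated as case-insensitive substrings here.
function evaluateRules(prompt, rules) {
  const lower = prompt.toLowerCase();
  for (const rule of rules) {
    const terms = rule.keywords || rule.patterns || [];
    if (terms.some((t) => lower.includes(t.toLowerCase()))) {
      return { action: rule.action, message: rule.message || null };
    }
  }
  return { action: "allow", message: null };
}
```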
Configure at: .orcho.ai/rules

Team Policies

Set policies for your organization:
Soft Enforcement:
  • Show warnings but allow override
  • Log overrides for review
Hard Enforcement:
  • Block critical-risk prompts
  • No override option
  • Require security team approval

Support

Need help with the browser extension?

Next Steps

1. Install the extension

Download from the Chrome Web Store.

2. Get your API key

3. Configure settings

Set thresholds and enable platforms.

4. Test with sample prompts

Try high-risk and low-risk examples.

5. Roll out to team

Share the installation guide with colleagues.