
API Documentation

Complete reference for the Briefcase Legal AI API


NYC Housing Rights API

No API Key Required
100% Accuracy
Production Ready

Start immediately with our production NYC Housing Rights Assistant. This OpenAI-compatible API provides 100% accurate answers to critical NYC tenant questions — no signup required.

Try It Now - No API Key Needed

Test critical NYC tenant questions with production accuracy

bash
# Test critical tenant questions
curl -X POST https://nyc-tenant-assistant-985807032281.us-central1.run.app/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "aanshshah/nyc-housing-rights",
    "messages": [
      {"role": "user", "content": "What is The Legal Aid Society phone number?"}
    ]
  }'

# Response: "The Legal Aid Society phone number is 212-577-3300."

100% Accuracy

  • Legal Aid Society: 212-577-3300
  • Max application fee: $20
  • Security deposit return: 14 days
  • HPD contact: 212-863-6300

Production Ready

  • Response time: <2 seconds
  • Uptime: 99.9% SLA
  • Serverless auto-scaling
  • Hosted on Google Cloud Run

OpenAI Compatible

  • Drop-in OpenAI replacement
  • Standard chat/completions API
  • Works with existing SDKs (see the sketch below)
  • No authentication required
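
Because the service follows the OpenAI wire format and needs no key, the official openai npm client can point straight at the production endpoint above. The sketch below assumes only that: the question text is illustrative, and the apiKey is a placeholder because the client constructor requires a non-empty value even though the service ignores it.

typescript
// Point the standard OpenAI client at the NYC Housing Rights endpoint.
// No authentication is required; the apiKey below is a placeholder only.
import OpenAI from 'openai'

const client = new OpenAI({
  apiKey: 'not-needed',
  baseURL: 'https://nyc-tenant-assistant-985807032281.us-central1.run.app/v1'
})

const completion = await client.chat.completions.create({
  model: 'aanshshah/nyc-housing-rights',
  messages: [
    { role: 'user', content: 'How long does a landlord have to return a security deposit?' }
  ]
})

console.log(completion.choices[0].message.content)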

Intelligence Supply Chain Methodology

This model demonstrates domain specialization via the Intelligence Supply Chain: 18 web-scraped examples were expanded into 597 training samples (a 33.2x expansion), achieving 100% accuracy on critical NYC tenant questions.

Getting Started

The Briefcase Legal AI API provides OpenAI-compatible endpoints with specialized legal models and research tools. You can use it as a drop-in replacement for OpenAI's API or integrate it with the Vercel AI SDK.

Direct API Usage

Use our REST API directly with any HTTP client

  • OpenAI-compatible endpoints
  • Any programming language
  • Streaming support

Vercel AI SDK

Purpose-built provider for the Vercel AI SDK

  • Type-safe integration
  • Built-in legal tools
  • React hooks support

Quality Modes

Use the qualityMode flag to balance speed, cost, and depth. High mode automatically enables advanced verification and genealogy features when available.

Standard (default)

Set qualityMode="standard"

Balanced accuracy and latency
Consensus enabled when beneficial
Good for iterative research

High

Set qualityMode="high"

Expanded token budget
Always runs verification pipeline
Triggers Holmes Rank genealogy when requested

Fast Path

System-detected when classification deems a query simple.

Automatically selected for simple queries
Skips consensus to minimize latency
Lower cost for simple queries

Request Options

Fine-tune responses with the optional parameters below. Fields not provided fall back to safe defaults. A combined request example follows the field list.

  • query (string, required): Natural language question or research task.
  • qualityMode ("standard" | "high", default "standard"): Controls token budget and verification depth.
  • includeGenealogy (boolean, default false): Return Holmes Rank genealogy graph and ranking metadata.
  • enableAdvancedFeatures (boolean, default derived): Force-enable verification/genealogy (auto-on for high mode).
  • conversationHistory (Array<{ role, content }>, default []): Optional chat turns for follow-up questions.
  • jurisdiction (enum, default "federal"): Bias results toward a specific jurisdiction (e.g. "state", "federal").
  • legalArea (string, default auto-detected): Inform routing for more precise research (e.g. "employment-law").
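
As a sketch of how these options compose, the request below asks a follow-up question in high quality mode with genealogy output enabled. The field names match the list above; the endpoint and header pattern follow the Authentication example later in this document, and the query text, conversation history, and other values are illustrative only.

typescript
// Illustrative follow-up request exercising the optional parameters above.
const response = await fetch('https://www.briefcasebrain.ai/api/legal-research', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer your-api-key',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    query: 'How do those precedents apply to remote employees?',
    qualityMode: 'high',          // expanded token budget + verification pipeline
    includeGenealogy: true,       // request Holmes Rank genealogy output
    jurisdiction: 'federal',
    legalArea: 'employment-law',
    conversationHistory: [
      { role: 'user', content: 'What are the key ADA accessibility precedents in the 2nd Circuit?' },
      { role: 'assistant', content: '...' }
    ]
  })
})

const packet = await response.json()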

Authentication

All API requests require a Bearer token in the Authorization header. Request API access to get your API key.

Authentication Example

typescript
const response = await fetch('https://www.briefcasebrain.ai/api/legal-research', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer your-api-key',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    query: 'What are the key ADA accessibility precedents in the 2nd Circuit?',
    qualityMode: 'standard',
    includeGenealogy: false
  })
})

API Endpoints

The API follows OpenAI's specification for maximum compatibility.

Legal Research

POST

/api/legal-research

Primary endpoint used by the web assistant and SDK. Returns a structured legal research packet with summaries, cases, statutes, and metadata.

Required Fields

  • query: Legal research question or prompt

Optional Fields

  • qualityMode: "standard" (default) or "high"
  • includeGenealogy: Boolean to request Holmes Rank genealogy output
  • enableAdvancedFeatures: Boolean to force-enable verification/genealogy (defaults to true when qualityMode is "high")
  • conversationHistory, jurisdiction, legalArea: Context parameters used for follow-up questions

Query Enhancement

POST

/api/enhance-query

Returns structured query rewrites, research strategies, and recommended sources to help users refine prompts before running full research.

Required Fields

  • query: Text to enhance

Authentication is optional for the web UI but required for API-key access. Streaming is not available for this endpoint. A minimal request sketch follows.
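
The sketch below sends the single required field using the same Bearer-token pattern as the rest of this document; the query text is illustrative, and the response is simply logged since only its general contents (rewrites, strategies, sources) are described above.

typescript
// Ask the enhancement endpoint to refine a rough prompt before full research.
const response = await fetch('https://www.briefcasebrain.ai/api/enhance-query', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer your-api-key', // optional for the web UI, required for API-key access
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    query: 'ada website accessibility lawsuits new york'
  })
})

const enhanced = await response.json()
console.log(enhanced) // structured rewrites, research strategies, recommended sources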

OpenAI-Compatible Bridge

POST

/api/v1/chat/completions

Compatibility endpoint for existing OpenAI integrations. Internally forwards to /api/legal-research using the standard quality mode.

  • Supports the messages format; model is accepted but currently ignored.
  • Streaming (stream: true) is supported via SSE; responses may arrive as a single chunk followed by [DONE].

Streaming Responses

The OpenAI-compatible bridge streams over Server-Sent Events, emitting multiple chunks of the assistant response followed by a final [DONE] sentinel.

OpenAI-Compatible Streaming

Consume SSE chunks from /api/v1/chat/completions

typescript
const response = await fetch('https://www.briefcasebrain.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer your-api-key',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'legal-research-pro', // optional today
    messages: [{ role: 'user', content: 'Summarize recent NLRA decisions' }],
    stream: true
  })
})

const reader = response.body?.getReader()
const decoder = new TextDecoder()
let buffer = ''

while (reader) {
  const { value, done } = await reader.read()
  if (done) break

  buffer += decoder.decode(value, { stream: true })
  const lines = buffer.split('\n')
  buffer = lines.pop() || '' // keep incomplete line in buffer

  for (const line of lines) {
    const trimmed = line.trim()
    if (!trimmed) continue
    if (trimmed === 'data: [DONE]') break
    if (trimmed.startsWith('data: ')) {
      const payload = JSON.parse(trimmed.slice(6))
      const delta = payload.choices?.[0]?.delta?.content
      if (delta) process.stdout.write(delta)
    }
  }
}

Vercel AI SDK Integration

Use our purpose-built provider package for seamless integration with the Vercel AI SDK.

SDK Installation & Usage

Install the package and start building legal AI applications; a usage sketch follows the install commands.

bash
npm install briefcasebrain-sdk
# or for legal-specific SDK
npm install briefcasebrain-ai-sdk-legal
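
The usage sketch below combines the install commands above with the provider call shown in the Migration section (createBriefcaseLegal and the 'legal-research-pro' model id). The import path for the provider is an assumption based on the package names above, so check the package README for the exact export.

typescript
// Sketch only: the provider import path is assumed from the package names above.
import { generateText } from 'ai'
import { createBriefcaseLegal } from 'briefcasebrain-ai-sdk-legal' // assumed path

const briefcase = createBriefcaseLegal({
  apiKey: process.env.LEGAL_API_KEY,
  baseURL: 'https://www.briefcasebrain.ai/api/v1'
})

const { text } = await generateText({
  model: briefcase('legal-research-pro'),
  prompt: 'Summarize the elements of a constructive eviction claim in New York.'
})

console.log(text)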

Rate Limits

Rate limits are enforced per API key across rolling minute, hour, and day windows. Limits are returned in response headers; a sketch of reading those headers follows the tier list.

Basic

Free

Per minute: 10
Per hour: 100
Per day: 2,400

Professional

$49/month

Per minute: 50
Per hour: 1,000
Per day: 24,000

Enterprise

Custom

Per minute: 200
Per hour: 10,000
Per day: 240,000
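
Because the limits are reported in response headers, a client can check them after each call and back off before hitting a 429. The header names below match those used in the error-handling example later in this section; the query text is illustrative.

typescript
// Read the rate-limit headers after a request so the client can throttle itself.
const response = await fetch('https://www.briefcasebrain.ai/api/legal-research', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer your-api-key',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ query: 'What notice is required before a rent increase in NYC?' })
})

const remaining = response.headers.get('X-RateLimit-Remaining')
const resetTime = response.headers.get('X-RateLimit-Reset')
console.log(`Requests remaining in this window: ${remaining}; resets at: ${resetTime}`)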

Error Handling

The API returns standard HTTP status codes and structured error responses.

  • 400 Bad Request: Invalid request parameters
  • 401 Unauthorized: Invalid or missing API key
  • 429 Rate Limited: Rate limit exceeded
  • 500 Server Error: Internal server error

Error Handling Example

Proper error handling with status codes and messages

typescript
try {
  const response = await fetch('https://www.briefcasebrain.ai/api/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer your-api-key',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'legal-research-pro',
      messages: [{ role: 'user', content: 'Legal question...' }]
    })
  })

  if (!response.ok) {
    if (response.status === 401) {
      throw new Error('Invalid API key')
    } else if (response.status === 429) {
      const resetTime = response.headers.get('X-RateLimit-Reset')
      const remaining = response.headers.get('X-RateLimit-Remaining')
      throw new Error(`Rate limit exceeded. Remaining: ${remaining}. Reset at: ${resetTime}`)
    } else if (response.status === 400) {
      const error = await response.json()
      throw new Error(`Bad request: ${error.error.message}`)
    } else {
      throw new Error(`API error: ${response.status}`)
    }
  }

  const data = await response.json()
  return data
} catch (error) {
  console.error('API request failed:', (error as Error).message)
  // Handle error appropriately
}

Migration from OpenAI

Migrating from OpenAI's API is straightforward: change the base URL and API key.

Before (OpenAI)

import OpenAI from 'openai'

const openai = new OpenAI({
  apiKey: process.env.AI_GATEWAY_API_KEY
})

const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "..." }]
})

After (Briefcase)

import { generateText } from 'ai'
// createBriefcaseLegal is exported by the Briefcase provider package (see SDK Installation above)

const briefcase = createBriefcaseLegal({
  apiKey: process.env.LEGAL_API_KEY,
  baseURL: 'https://www.briefcasebrain.ai/api/v1'
})

const { text } = await generateText({
  model: briefcase('legal-research-pro'),
  prompt: "..."
})

Alternative: OpenAI client with Briefcase

import OpenAI from 'openai'

const openai = new OpenAI({
  apiKey: process.env.BRIEFCASEBRAIN_API_KEY,
  baseURL: 'https://www.briefcasebrain.ai/api/v1'
})

const response = await openai.chat.completions.create({
  model: 'legal-research-pro',
  messages: [{ role: 'user', content: '...' }],
  stream: true
})