---
# Required metadata
license: pddl
language:
- en
pretty_name: CourtListener Legal Dataset
size_categories:
- 10M<n<100M
---

# CourtListener Legal Dataset

A Parquet mirror of [CourtListener](https://www.courtlistener.com/) bulk data from the [Free Law Project](https://free.law/): opinions, opinion clusters, dockets, citations, courts, judges, financial disclosures, and precomputed embeddings.

## Quick Start

### Stream Large Tables

Most tables run to tens of millions of rows (see the Table Size Guide below), so stream them rather than downloading in full:

```python
from datasets import load_dataset

# Stream rows without downloading the whole table
ds = load_dataset(
    "drengskapur/courtlistener",
    "opinion-clusters",
    split="train",
    streaming=True
)

for i, row in enumerate(ds):
    print(row["case_name"])
    if i >= 100:
        break
```

### Cache Results Locally

```python
from datasets import load_dataset

# Download once, cache forever
ds = load_dataset(
    "drengskapur/courtlistener",
    "courts",  # Small table - safe to download
    split="train",
    cache_dir="./hf_cache"
)

# For large tables, use DuckDB with local caching
import duckdb

conn = duckdb.connect("courtlistener_cache.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS scotus_cases AS
    SELECT *
    FROM 'hf://datasets/drengskapur/courtlistener/data/opinion-clusters/*.parquet'
    WHERE court_id = 'scotus'
""")
```

### Table Size Guide

| Config | Rows | Recommended Method |
|--------|------|--------------------|
| `courts` | ~700 | `load_dataset()` - safe to download |
| `people-db-*` | ~16K-30K | `load_dataset()` - safe to download |
| `citations` | ~18M | DuckDB or streaming |
| `opinion-clusters` | ~73M | DuckDB or streaming |
| `dockets` | ~70M | DuckDB or streaming |
| `opinions` | ~9M | DuckDB (large text) or streaming |
| `citation-map` | ~76M | DuckDB only |

## Available Configurations

### Core Legal Data

| Config | Description | Rows | Size |
|--------|-------------|------|------|
| `opinion-clusters` | Case metadata, summaries, citation counts | ~73M | ~2.5GB |
| `opinions` | Full opinion text (plain, HTML, XML) | ~9M | ~54GB |
| `courts` | Court metadata (700+ courts) | ~700 | ~100KB |
| `dockets` | RECAP docket metadata | ~70M | ~5GB |
| `citations` | Citation references to reporters | ~18M | ~1GB |
| `citation-map` | Citation graph edges | ~76M | ~500MB |
| `parentheticals` | Court-written case summaries | ~6.5M | ~300MB |

### People & Financial Data

| Config | Description | Rows |
|--------|-------------|------|
| `people-db-people` | Judge biographical information | ~16K |
| `people-db-positions` | Judge positions and appointments | ~30K |
| `people-db-schools` | Law school information | ~1K |
| `financial-disclosures` | Judge financial disclosure reports | ~1.7M |

### Additional Tables

| Config | Description | Rows |
|--------|-------------|------|
| `oral-arguments` | Oral argument audio metadata | ~200K |
| `fjc-integrated-database` | FJC federal case data | ~10M |

### Embeddings

| Config | Description | Model |
|--------|-------------|-------|
| `embeddings-opinion-clusters` | Case metadata embeddings | BGE-large-en-v1.5 |
| `embeddings-opinions` | Opinion text embeddings | BGE-large-en-v1.5 |
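Any of the configurations above can be loaded by passing its name as the second argument to `load_dataset()`. The published config names can also be listed programmatically; the sketch below assumes the `datasets` library and network access to the Hub, and uses the small `people-db-people` table as an example:

```python
from datasets import get_dataset_config_names, load_dataset

# List every configuration published under this dataset repo
configs = get_dataset_config_names("drengskapur/courtlistener")
print(configs)

# Small tables such as the judge biographies are safe to load in full
people = load_dataset("drengskapur/courtlistener", "people-db-people", split="train")
print(people[0])  # one judge record; available columns depend on the table
```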
## API Access (No Downloads Required)

Query directly via the HuggingFace Datasets Server API:

```bash
# Get rows
curl "https://datasets-server.huggingface.co/rows?dataset=drengskapur/courtlistener&config=courts&split=train&length=10"

# Search (full-text)
curl "https://datasets-server.huggingface.co/search?dataset=drengskapur/courtlistener&config=opinion-clusters&split=train&query=qualified%20immunity"

# Filter (SQL-like WHERE)
curl "https://datasets-server.huggingface.co/filter?dataset=drengskapur/courtlistener&config=courts&split=train&where=jurisdiction='F'"
```

### Python Client with Rate Limit Handling

```python
import time

import httpx


class CourtListenerAPI:
    BASE = "https://datasets-server.huggingface.co"

    def __init__(self, max_retries=5):
        self.client = httpx.Client(timeout=30)
        self.max_retries = max_retries

    def _request(self, endpoint: str, params: dict):
        for attempt in range(self.max_retries):
            resp = self.client.get(f"{self.BASE}/{endpoint}", params=params)
            if resp.status_code == 200:
                return resp.json()
            if resp.status_code == 429:  # Rate limited
                wait = 2 ** attempt
                print(f"Rate limited, waiting {wait}s...")
                time.sleep(wait)
                continue
            resp.raise_for_status()
        raise Exception("Max retries exceeded")

    def search(self, query: str, config: str = "opinion-clusters", length: int = 10):
        return self._request("search", {
            "dataset": "drengskapur/courtlistener",
            "config": config,
            "split": "train",
            "query": query,
            "length": length
        })

    def filter(self, where: str, config: str = "courts", length: int = 100):
        return self._request("filter", {
            "dataset": "drengskapur/courtlistener",
            "config": config,
            "split": "train",
            "where": where,
            "length": length
        })


# Usage
api = CourtListenerAPI()
results = api.search("miranda rights", length=20)
federal_courts = api.filter("jurisdiction='F'", config="courts")
```

## DuckDB Access

Query directly from DuckDB without downloading:

```sql
-- Install and load httpfs extension
INSTALL httpfs;
LOAD httpfs;

-- Query Supreme Court cases
SELECT case_name, date_filed, citation_count
FROM 'hf://datasets/drengskapur/courtlistener/data/opinion-clusters/*.parquet'
WHERE court_id = 'scotus'
ORDER BY citation_count DESC
LIMIT 10;

-- Find all opinions citing a specific case
SELECT o.id, o.plain_text
FROM 'hf://datasets/drengskapur/courtlistener/data/opinions/*.parquet' o
JOIN 'hf://datasets/drengskapur/courtlistener/data/citation-map/*.parquet' cm
  ON o.id = cm.citing_opinion_id
WHERE cm.cited_opinion_id = 12345;
```

## Python Examples

### Semantic Search with Embeddings

```python
from datasets import load_dataset
import numpy as np

# Load embeddings
embeddings_ds = load_dataset(
    "drengskapur/courtlistener",
    "embeddings-opinion-clusters",
    split="train"
)

# Load metadata
clusters = load_dataset(
    "drengskapur/courtlistener",
    "opinion-clusters",
    split="train"
)

# Create a simple search index
def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Query embedding (get from your embedding model)
query_embedding = get_embedding("qualified immunity police")

# Find top matches
scores = [
    (i, cosine_similarity(query_embedding, row["embedding"]))
    for i, row in enumerate(embeddings_ds)
]
top_indices = sorted(scores, key=lambda x: -x[1])[:10]

for idx, score in top_indices:
    print(f"{clusters[idx]['case_name']}: {score:.3f}")
```

### Filter by Court and Date

```python
from datasets import load_dataset

ds = load_dataset("drengskapur/courtlistener", "opinion-clusters", split="train")

# Filter to Supreme Court cases from 2020
scotus_2020 = ds.filter(
    lambda x: x["court_id"] == "scotus"
    and x["date_filed"]
    and x["date_filed"].startswith("2020")
)

for case in scotus_2020:
    print(f"{case['case_name']} ({case['date_filed']})")
```

### Join Citations with Opinion Text

```python
from datasets import load_dataset

# Load citation map and opinions
citations = load_dataset("drengskapur/courtlistener", "citation-map", split="train")
opinions = load_dataset("drengskapur/courtlistener", "opinions", split="train")

# Build opinion lookup (for small-scale use)
opinion_lookup = {op["id"]: op for op in opinions.take(10000)}

# Find what cites a specific opinion
target_opinion_id = 12345
citing = [c for c in citations if c["cited_opinion_id"] == target_opinion_id]
print(f"Found {len(citing)} citations")
```
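Iterating over the ~76M-row `citation-map` in Python this way is slow; the same lookup can be pushed into DuckDB so only matching rows are transferred. A minimal sketch, assuming a DuckDB build whose `httpfs` extension understands `hf://` paths (the opinion id is the same placeholder used above):

```python
import duckdb

# Push the citation lookup down into DuckDB instead of iterating in Python
con = duckdb.connect()
con.execute("INSTALL httpfs;")
con.execute("LOAD httpfs;")

target_opinion_id = 12345  # placeholder id, as in the example above
n_citations = con.execute(
    """
    SELECT COUNT(*)
    FROM 'hf://datasets/drengskapur/courtlistener/data/citation-map/*.parquet'
    WHERE cited_opinion_id = ?
    """,
    [target_opinion_id],
).fetchone()[0]
print(f"Found {n_citations} citations")
```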
## Schema Details

### Opinion Clusters

Key fields for case research:

- `id` - Unique identifier
- `case_name`, `case_name_full` - Case names
- `date_filed` - Filing date
- `court_id` - Court identifier (join with `courts`)
- `citation_count` - Number of times cited
- `precedential_status` - Published, Unpublished, etc.
- `syllabus` - Case summary
- `attorneys` - Attorney information
- `judges` - Judges on the panel

### Opinions

Key fields for full text:

- `id` - Unique identifier
- `cluster_id` - Links to opinion-clusters
- `plain_text` - Plain text version (best for NLP)
- `html` - HTML version with formatting
- `type` - Opinion type (majority, dissent, concurrence)
- `author_id` - Judge who wrote the opinion

## Data Source

This dataset mirrors [CourtListener](https://www.courtlistener.com/) bulk data from [Free Law Project](https://free.law/).

- **Update Frequency**: Daily (source data)
- **Source**: CourtListener S3 bulk data exports
- **Format**: Parquet with zstd compression

## License

Public Domain Dedication and License (PDDL) - same as CourtListener source data.

## Citation

```bibtex
@misc{courtlistener,
  author = {Free Law Project},
  title = {CourtListener},
  year = {2024},
  publisher = {Free Law Project},
  howpublished = {\url{https://www.courtlistener.com/}},
}
```