# CourtListener Legal Dataset
Public mirror of CourtListener bulk legal data, converted to Parquet format for efficient querying and ML workflows.
## Quick Start

```python
from datasets import load_dataset

# Load opinion clusters (default - case metadata and summaries)
ds = load_dataset("drengskapur/courtlistener", split="train")

# Load a specific configuration
courts = load_dataset("drengskapur/courtlistener", "courts", split="train")
opinions = load_dataset("drengskapur/courtlistener", "opinions", split="train")

# Stream large datasets (recommended for opinions/dockets)
ds = load_dataset("drengskapur/courtlistener", "opinions", split="train", streaming=True)
for example in ds.take(10):
    print(example["case_name"])
```
## Efficient Querying (Avoid Rate Limits)

### Use DuckDB for Server-Side Filtering (Recommended)

Query directly without downloading - filtering happens on HuggingFace servers:

```python
import duckdb

conn = duckdb.connect()
conn.execute("INSTALL httpfs; LOAD httpfs;")

# Only download Supreme Court cases (server-side filter)
scotus = conn.execute("""
    SELECT id, case_name, date_filed, citation_count
    FROM 'hf://datasets/drengskapur/courtlistener/data/opinion-clusters/*.parquet'
    WHERE court_id = 'scotus'
    ORDER BY citation_count DESC
    LIMIT 1000
""").df()

# Get opinions for specific clusters only
cluster_ids = scotus['id'].tolist()[:100]
opinions = conn.execute(f"""
    SELECT id, cluster_id, plain_text
    FROM 'hf://datasets/drengskapur/courtlistener/data/opinions/*.parquet'
    WHERE cluster_id IN ({','.join(map(str, cluster_ids))})
""").df()
```
### Use Streaming for Large Tables

Avoid loading entire datasets into memory:

```python
from datasets import load_dataset

# Stream and filter - never loads the full dataset.
# Filter on opinion-clusters, which carries court_id (see Schema Details).
ds = load_dataset("drengskapur/courtlistener", "opinion-clusters", streaming=True, split="train")

# Iterate with early stopping
results = []
for example in ds:
    if example.get("court_id") == "scotus":
        results.append(example)
    if len(results) >= 100:
        break
```
### Cache Results Locally

```python
from datasets import load_dataset

# Download once, cache forever
ds = load_dataset(
    "drengskapur/courtlistener",
    "courts",  # Small table - safe to download
    split="train",
    cache_dir="./hf_cache",
)

# For large tables, use DuckDB with local caching
import duckdb

conn = duckdb.connect("courtlistener_cache.db")
conn.execute("INSTALL httpfs; LOAD httpfs;")
conn.execute("""
    CREATE TABLE IF NOT EXISTS scotus_cases AS
    SELECT * FROM 'hf://datasets/drengskapur/courtlistener/data/opinion-clusters/*.parquet'
    WHERE court_id = 'scotus'
""")
```
## Table Size Guide

| Config | Rows | Recommended Method |
|---|---|---|
| `courts` | ~700 | `load_dataset()` - safe to download |
| `people-db-*` | ~16K-30K | `load_dataset()` - safe to download |
| `citations` | ~18M | DuckDB or streaming |
| `opinion-clusters` | ~73M | DuckDB or streaming |
| `dockets` | ~70M | DuckDB or streaming |
| `opinions` | ~9M | DuckDB (large text) or streaming |
| `citation-map` | ~76M | DuckDB only |
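If you want to check how much data a config actually contains before picking a method, you can list its Parquet shards with `huggingface_hub`'s `HfFileSystem`. This is a minimal sketch; it assumes the `data/<config>/*.parquet` layout used by the DuckDB examples in this README.

```python
from huggingface_hub import HfFileSystem

fs = HfFileSystem()

# List the Parquet shards for one config and sum their sizes.
# Assumption: files live under data/<config>/ as in the hf:// paths below.
files = fs.glob("datasets/drengskapur/courtlistener/data/opinion-clusters/*.parquet")
total_bytes = sum(fs.info(path)["size"] for path in files)
print(f"{len(files)} files, {total_bytes / 1e9:.2f} GB")
```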
## Available Configurations

### Core Legal Data

| Config | Description | Rows | Size |
|---|---|---|---|
| `opinion-clusters` | Case metadata, summaries, citation counts | ~73M | ~2.5GB |
| `opinions` | Full opinion text (plain, HTML, XML) | ~9M | ~54GB |
| `courts` | Court metadata (700+ courts) | ~700 | ~100KB |
| `dockets` | RECAP docket metadata | ~70M | ~5GB |
| `citations` | Citation references to reporters | ~18M | ~1GB |
| `citation-map` | Citation graph edges | ~76M | ~500MB |
| `parentheticals` | Court-written case summaries | ~6.5M | ~300MB |
### People & Financial Data

| Config | Description | Rows |
|---|---|---|
| `people-db-people` | Judge biographical information | ~16K |
| `people-db-positions` | Judge positions and appointments | ~30K |
| `people-db-schools` | Law school information | ~1K |
| `financial-disclosures` | Judge financial disclosure reports | ~1.7M |
### Additional Tables

| Config | Description | Rows |
|---|---|---|
| `oral-arguments` | Oral argument audio metadata | ~200K |
| `fjc-integrated-database` | FJC federal case data | ~10M |
### Embeddings

| Config | Description | Model |
|---|---|---|
| `embeddings-opinion-clusters` | Case metadata embeddings | BGE-large-en-v1.5 |
| `embeddings-opinions` | Opinion text embeddings | BGE-large-en-v1.5 |
## API Access (No Downloads Required)

Query directly via the HuggingFace Datasets Server API:

```bash
# Get rows
curl "https://datasets-server.huggingface.co/rows?dataset=drengskapur/courtlistener&config=courts&split=train&length=10"

# Search (full-text)
curl "https://datasets-server.huggingface.co/search?dataset=drengskapur/courtlistener&config=opinion-clusters&split=train&query=qualified%20immunity"

# Filter (SQL-like WHERE)
curl "https://datasets-server.huggingface.co/filter?dataset=drengskapur/courtlistener&config=courts&split=train&where=jurisdiction='F'"
```
### Python Client with Rate Limit Handling

```python
import time

import httpx


class CourtListenerAPI:
    BASE = "https://datasets-server.huggingface.co"

    def __init__(self, max_retries=5):
        self.client = httpx.Client(timeout=30)
        self.max_retries = max_retries

    def _request(self, endpoint: str, params: dict):
        for attempt in range(self.max_retries):
            resp = self.client.get(f"{self.BASE}/{endpoint}", params=params)
            if resp.status_code == 200:
                return resp.json()
            if resp.status_code == 429:  # Rate limited
                wait = 2 ** attempt
                print(f"Rate limited, waiting {wait}s...")
                time.sleep(wait)
                continue
            resp.raise_for_status()
        raise Exception("Max retries exceeded")

    def search(self, query: str, config: str = "opinion-clusters", length: int = 10):
        return self._request("search", {
            "dataset": "drengskapur/courtlistener",
            "config": config,
            "split": "train",
            "query": query,
            "length": length,
        })

    def filter(self, where: str, config: str = "courts", length: int = 100):
        return self._request("filter", {
            "dataset": "drengskapur/courtlistener",
            "config": config,
            "split": "train",
            "where": where,
            "length": length,
        })


# Usage
api = CourtListenerAPI()
results = api.search("miranda rights", length=20)
federal_courts = api.filter("jurisdiction='F'", config="courts")
```
## DuckDB Access

Query directly from DuckDB without downloading:

```sql
-- Install and load the httpfs extension
INSTALL httpfs; LOAD httpfs;

-- Query Supreme Court cases
SELECT case_name, date_filed, citation_count
FROM 'hf://datasets/drengskapur/courtlistener/data/opinion-clusters/*.parquet'
WHERE court_id = 'scotus'
ORDER BY citation_count DESC
LIMIT 10;

-- Find all opinions citing a specific case
SELECT o.id, o.plain_text
FROM 'hf://datasets/drengskapur/courtlistener/data/opinions/*.parquet' o
JOIN 'hf://datasets/drengskapur/courtlistener/data/citation-map/*.parquet' cm
  ON o.id = cm.citing_opinion_id
WHERE cm.cited_opinion_id = 12345;
```
## Python Examples

### Semantic Search with Embeddings

```python
from datasets import load_dataset
import numpy as np

# Load embeddings
embeddings_ds = load_dataset(
    "drengskapur/courtlistener",
    "embeddings-opinion-clusters",
    split="train",
)

# Load metadata
clusters = load_dataset(
    "drengskapur/courtlistener",
    "opinion-clusters",
    split="train",
)

# Simple brute-force cosine similarity search
def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Query embedding (get from your embedding model; see the sketch below)
query_embedding = get_embedding("qualified immunity police")

# Find top matches
scores = [
    (i, cosine_similarity(query_embedding, row["embedding"]))
    for i, row in enumerate(embeddings_ds)
]
top_matches = sorted(scores, key=lambda x: -x[1])[:10]

for idx, score in top_matches:
    print(f"{clusters[idx]['case_name']}: {score:.3f}")
```
### Filter by Court and Date

```python
from datasets import load_dataset

ds = load_dataset("drengskapur/courtlistener", "opinion-clusters", split="train")

# Filter to Supreme Court cases from 2020
scotus_2020 = ds.filter(
    lambda x: x["court_id"] == "scotus"
    and x["date_filed"]
    and x["date_filed"].startswith("2020")
)

for case in scotus_2020:
    print(f"{case['case_name']} ({case['date_filed']})")
```
### Join Citations with Opinion Text

```python
from datasets import load_dataset

# Stream the citation map and opinions (both are large; see the Table Size Guide)
citations = load_dataset("drengskapur/courtlistener", "citation-map", split="train", streaming=True)
opinions = load_dataset("drengskapur/courtlistener", "opinions", split="train", streaming=True)

# Build an opinion lookup (for small-scale use)
opinion_lookup = {op["id"]: op for op in opinions.take(10000)}

# Find what cites a specific opinion
target_opinion_id = 12345
citing = [c for c in citations if c["cited_opinion_id"] == target_opinion_id]
print(f"Found {len(citing)} citations")
```
## Schema Details

### Opinion Clusters

Key fields for case research (a join sketch follows the list):

- `id` - Unique identifier
- `case_name`, `case_name_full` - Case names
- `date_filed` - Filing date
- `court_id` - Court identifier (join with `courts`)
- `citation_count` - Number of times cited
- `precedential_status` - Published, Unpublished, etc.
- `syllabus` - Case summary
- `attorneys` - Attorney information
- `judges` - Judges on the panel
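For example, `court_id` can be joined against the `courts` config. A minimal DuckDB sketch; it assumes the courts table is stored under `data/courts/` like the other configs and exposes `id` and `jurisdiction` columns (as suggested by the `jurisdiction='F'` filter examples above), so verify the actual column names before relying on it:

```python
import duckdb

conn = duckdb.connect()
conn.execute("INSTALL httpfs; LOAD httpfs;")

# Hypothetical join: enrich opinion clusters with court metadata.
# Assumption: data/courts/*.parquet exists and has `id` / `jurisdiction` columns.
df = conn.execute("""
    SELECT oc.case_name, oc.date_filed, c.jurisdiction
    FROM 'hf://datasets/drengskapur/courtlistener/data/opinion-clusters/*.parquet' oc
    JOIN 'hf://datasets/drengskapur/courtlistener/data/courts/*.parquet' c
      ON oc.court_id = c.id
    WHERE c.jurisdiction = 'F'
    LIMIT 20
""").df()
print(df)
```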
### Opinions

Key fields for full text (a join sketch follows the list):

- `id` - Unique identifier
- `cluster_id` - Links to `opinion-clusters`
- `plain_text` - Plain text version (best for NLP)
- `html` - HTML version with formatting
- `type` - Opinion type (majority, dissent, concurrence)
- `author_id` - Judge who wrote the opinion
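To attach case names to opinion text, `cluster_id` joins back to `opinion-clusters`. A minimal DuckDB sketch using the same Parquet paths and field names shown elsewhere in this README:

```python
import duckdb

conn = duckdb.connect()
conn.execute("INSTALL httpfs; LOAD httpfs;")

# Join opinion text to its cluster's case metadata via cluster_id.
# Column names follow the key-field lists above.
df = conn.execute("""
    SELECT oc.case_name, oc.date_filed, o.plain_text
    FROM 'hf://datasets/drengskapur/courtlistener/data/opinions/*.parquet' o
    JOIN 'hf://datasets/drengskapur/courtlistener/data/opinion-clusters/*.parquet' oc
      ON o.cluster_id = oc.id
    WHERE oc.court_id = 'scotus'
    LIMIT 5
""").df()
print(df[["case_name", "date_filed"]])
```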
## Data Source

This dataset mirrors CourtListener bulk data from Free Law Project.

- Update Frequency: Daily (source data)
- Source: CourtListener S3 bulk data exports
- Format: Parquet with zstd compression
## License

Public Domain Dedication and License (PDDL) - same as the CourtListener source data.
## Citation

```bibtex
@misc{courtlistener,
  author = {Free Law Project},
  title = {CourtListener},
  year = {2024},
  publisher = {Free Law Project},
  howpublished = {\url{https://www.courtlistener.com/}},
}
```