Meta TRIBE v2 · A100 GPU

See your brain react to anything.

Submit a YouTube link, video, audio, or text. A real neural encoding model predicts which 368 brain regions activate — millisecond by millisecond — across vision, language, and sound.

YouTube URL · Video file · Audio · Text
20,484 vertices|368 annotated regions|1.5 s TR
Primary Visual Cortex (V1) · Fusiform Face Area · MT/V5 (Motion Area) · Broca's Area (44/45) · Wernicke's Area · Primary Auditory Cortex · Anterior Cingulate Cortex · Parahippocampal Place Area · Dorsolateral Prefrontal Cortex · Supplementary Motor Area · Insula · Angular Gyrus · Superior Temporal Sulcus · Retrosplenial Cortex · Orbitofrontal Cortex
20,484 cortical vertices (fsaverage5 surface)
368 annotated regions (HCP MMP1 parcellation)
3 input modalities (video · audio · text)
1.5 s temporal resolution (TR, repetition time)

The problem

How neurologically stimulating is your content?

Before you post, wouldn't you want to know how stimulating your audio or video actually is? NeuralPrint shows you exactly which brain regions light up, before a single viewer watches.

🧠
You can't see how stimulating your content really is
📊
Views & clicks don't reveal neurological impact
🎯
Focus groups can't measure brain activation
🤷
Every publish is an educated guess

Pipeline

From stimulus to brain map

The full pipeline runs end-to-end in under a minute on GPU.

01

Submit any stimulus

Paste a YouTube URL, upload a video or audio file, or type raw text. NeuralPrint handles transcription, audio extraction, and chunking internally.

yt-dlp · Whisper · gTTS
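Every input, whatever its source, gets sliced into windows aligned with the model's 1.5 s TR before feature extraction. A minimal sketch of that alignment step (the function name is illustrative, not NeuralPrint's actual API):

```python
def tr_windows(duration_s: float, tr: float = 1.5) -> list[tuple[float, float]]:
    """Split a stimulus of duration_s seconds into TR-aligned
    (start, end) windows; the final window may be shorter."""
    windows = []
    t = 0.0
    while t < duration_s:
        windows.append((round(t, 3), round(min(t + tr, duration_s), 3)))
        t += tr
    return windows

# A 10-second clip yields 7 windows; the last is a 1.0 s remainder.
print(tr_windows(10.0))
```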
02

GPU extracts features

Three foundation models run on an A100 GPU via Modal: Llama 3.2 for language, Wav2Vec-BERT for audio, V-JEPA2 for video. Features are cached — reruns are instant.

~30 s · A100 · cached
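The "reruns are instant" behavior comes from caching extracted features. One plausible scheme (the cache layout and keying are assumptions for illustration, not NeuralPrint's internals) is to key the cache on a hash of the raw stimulus bytes:

```python
import hashlib
import json
import pathlib
import tempfile

# Illustrative cache location; a real deployment would use a Modal volume.
CACHE_DIR = pathlib.Path(tempfile.gettempdir()) / "np_feature_cache"

def cached_features(stimulus_bytes: bytes, extractor):
    """Key the cache on a SHA-256 of the raw stimulus so an identical
    input skips the ~30 s GPU pass on rerun."""
    CACHE_DIR.mkdir(exist_ok=True)
    key = hashlib.sha256(stimulus_bytes).hexdigest()
    path = CACHE_DIR / f"{key}.json"
    if path.exists():                       # cache hit: no GPU work
        return json.loads(path.read_text())
    features = extractor(stimulus_bytes)    # cache miss: run the model
    path.write_text(json.dumps(features))
    return features
```

Content-addressed keys mean the same YouTube clip submitted twice maps to the same cache entry regardless of filename.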
03

TRIBE v2 maps the brain

Meta's brain encoder maps multimodal features onto 20,484 cortical vertices — producing a time-resolved fMRI-like prediction at 1.5 s TR resolution.

20,484 vertices · 1.5 s TR
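TRIBE's actual mapping is a transformer, but the shape of the output is easy to picture with a toy linear encoding readout: one predicted activation per cortical vertex per TR. The dimensions below are the real vertex count; the feature width and weights are invented for illustration:

```python
import numpy as np

def encode(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Toy linear readout of a brain encoding model.
    features: (n_trs, n_features); weights: (n_features, n_vertices).
    Returns a (n_trs, n_vertices) time-resolved cortical prediction."""
    return features @ weights

rng = np.random.default_rng(0)
feats = rng.standard_normal((40, 512))        # 40 TRs = 60 s of stimulus at 1.5 s TR
w = rng.standard_normal((512, 20484)) * 0.01  # one column per fsaverage5 vertex
brain = encode(feats, w)
print(brain.shape)                            # one row per TR, one column per vertex
```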
04

Explore the response

3D brain viewer, region-by-region breakdown, timeline scrubber, modality contribution map, and plain-English AI observations across 368 annotated regions.

3D · Timeline · Insights

The output

More than a brain map — actionable intelligence.

Choose your role and optimization objective. NeuralPrint returns a scored analysis with region-by-region breakdown, neural highlights, quiet zones, and AI-generated recommendations.

Scoring objectives

Emotional Impact · Engagement · Retention · Calm & Wellbeing · Persuasion
B+

NeuralPrint Score™

❤️ Emotional Impact · 🎬 Content Creator

78 / 100

Top activated regions

Cingulate Cortex · 89%
Fusiform Face Area · 76%
Amygdala · 71%
Broca's Area · 68%
Motor Cortex · 44%

⬡ AI recommendation

Front-load your emotional hook — move the cliff scene to the opening 3 seconds. The amygdala response peaks when emotional stimuli appear early.
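Conceptually, an objective score is a weighted roll-up of region activations, with each objective emphasizing different regions. The weights and grade cutoffs below are invented to illustrate the idea; they are not NeuralPrint's actual scoring model:

```python
# Hypothetical weights: an emotional-impact objective might emphasize
# limbic and face-processing regions.
OBJECTIVE_WEIGHTS = {
    "emotional_impact": {
        "Amygdala": 0.40,
        "Cingulate Cortex": 0.35,
        "Fusiform Face Area": 0.25,
    },
}

GRADES = [(90, "A"), (80, "A-"), (75, "B+"), (70, "B"), (0, "C")]

def score(activations: dict[str, float], objective: str) -> tuple[float, str]:
    """Weighted average of 0-100 region activations, mapped to a letter grade."""
    weights = OBJECTIVE_WEIGHTS[objective]
    total = sum(activations[region] * w for region, w in weights.items())
    grade = next(g for cutoff, g in GRADES if total >= cutoff)
    return round(total, 1), grade
```

With the activations from the card above (Cingulate 89, Fusiform Face Area 76, Amygdala 71), this toy scheme lands near the card's 78 / B+.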

Built for

Every content professional

🎬

Content Creators

Optimize hooks, pacing, and emotional beats. Know which frame triggers the amygdala before you publish.

Engagement · Emotional Impact

📣

Marketers

Measure neural persuasion and validate creative before spending ad budget. Predict purchase intent at the brain level.

Persuasion · Attention

📚

Educators

Maximize knowledge retention and optimize lesson structure to ensure memory encoding fires in the hippocampus.

Learning · Retention

🧠

Therapists

Design guided meditations and therapeutic content that promote neural calm, validated against predicted default mode network activity.

Calm · Wellbeing

Technology

Powered by frontier research

TRIBE v2 from Meta FAIR combines three state-of-the-art encoders into a unified brain prediction architecture.

V-JEPA2 · Video
Wav2Vec-BERT · Audio
Llama 3.2 · Text
Transformer · Fusion
Subject Block
Cortical Map · 20,484 vertices · fsaverage5 surface
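TRIBE fuses the three encoder streams with a transformer; the prerequisite for any such fusion is that per-modality features share the same TR grid. A toy sketch of that alignment-and-concatenation step (feature widths are illustrative, not the encoders' true dimensions):

```python
import numpy as np

def fuse(video_f: np.ndarray, audio_f: np.ndarray, text_f: np.ndarray) -> np.ndarray:
    """Toy fusion: crop all modalities to the shared number of TRs,
    then concatenate along the feature axis."""
    n_trs = min(len(video_f), len(audio_f), len(text_f))
    return np.concatenate(
        [video_f[:n_trs], audio_f[:n_trs], text_f[:n_trs]], axis=1
    )

v = np.zeros((40, 1024))   # V-JEPA2 video features per TR (width illustrative)
a = np.zeros((40, 768))    # Wav2Vec-BERT audio features per TR
t = np.zeros((40, 2048))   # Llama 3.2 text features per TR
fused = fuse(v, a, t)
print(fused.shape)         # one fused vector per TR
```

A real transformer fusion attends across modalities rather than concatenating, but the time-alignment requirement is the same.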

Meta FAIR · TRIBE v2 Model
Modal · GPU Inference
Hugging Face · Model Weights
NVIDIA · A100 · 40 GB

Ready?

What is your brain doing right now?

Pick any video, podcast episode, or book passage. In under a minute, see the neuroscience.

Start analyzing

Runs on A100 GPU via Modal  ·  No account required