Start a debate to see the discussion unfold
Configure which AI models to use for each phase of the debate.
Define custom model endpoints to use in debates.
Generate invite codes to share with others. Each code can only be used once.
Customize the look and feel of Perspectives to match your preferences.
Choose a color theme for the interface.
Select a font family for body text.
This affects the main interface text. The logo and monospace elements remain unchanged.
Advanced debugging and testing options.
Choose which token optimization profile to use for new debates.
Phase 1 provides ~48% token savings, Phase 2 provides ~52% savings.
Information about token tracking accuracy for the current debate.
No active debate. Token tracking information will appear here during a debate.
Set a maximum requests-per-minute (RPM) limit for web search queries. This throttles all web search requests made by you, regardless of where they're triggered.
Set to 0 for no limit, or specify a maximum number of web search requests per minute. Throttling is applied right before requests are sent to the search service.
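The throttling behavior described above (a per-minute cap, with 0 meaning unlimited, applied just before each request is sent) can be sketched as a sliding-window limiter. This is an illustrative sketch only, not the app's actual implementation; the class and method names are hypothetical:

```python
from collections import deque

class SearchRateLimiter:
    """Illustrative sliding-window limiter: at most `rpm` requests
    per 60-second window; rpm == 0 disables throttling entirely."""

    def __init__(self, rpm: int):
        self.rpm = rpm
        self.sent = deque()  # timestamps of recently sent requests

    def wait_time(self, now: float) -> float:
        """Seconds to wait before the next request may be sent."""
        if self.rpm <= 0:
            return 0.0  # 0 means no limit
        # Drop timestamps that have aged out of the 60-second window.
        while self.sent and now - self.sent[0] >= 60.0:
            self.sent.popleft()
        if len(self.sent) < self.rpm:
            return 0.0
        # Wait until the oldest request in the window expires.
        return 60.0 - (now - self.sent[0])

    def record(self, now: float) -> None:
        """Call right before a request is actually sent."""
        self.sent.append(now)
```

A caller would check `wait_time()`, sleep for that duration if it is positive, then `record()` and send the request.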
Administrative tools and system overview.
Configure system-wide settings that affect all users.
Choose the web search provider used by all debates. This setting applies system-wide to all users.
View and manage all users in the system.
View emails on the waitlist.
Import a debate from a full export JSON file. Use this to duplicate debates across development and production environments.
Choose a full export JSON file downloaded from another environment.
If checked, keeps the original debate UUID (the import may fail if a debate with that UUID already exists). If unchecked, generates a new UUID.

View user feedback submissions.
Create scheduled debates that run automatically. Each automation deducts credits from your balance just like a manual debate.
Are you sure?
Local models run on your own computer using Ollama. API calls are made directly from your browser to your local machine.
Download and install Ollama from ollama.com/download
Open a terminal and download a model:
ollama pull llama3.2
See all available models at ollama.com/library
You must start Ollama with CORS enabled so your browser can connect to it.
Run this command in Terminal:
OLLAMA_ORIGINS="*" ollama serve
Keep this terminal window open while using Perspectives.
Run this command in your terminal:
OLLAMA_ORIGINS="*" ollama serve
Keep this terminal window open while using Perspectives.
Run this in Command Prompt or PowerShell:
set OLLAMA_ORIGINS=*&& ollama serve
Or in PowerShell:
$env:OLLAMA_ORIGINS="*"; ollama serve
Keep this window open while using Perspectives.
http://localhost:11434/v1
llama3.2
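Once Ollama is running as described above, it serves an OpenAI-compatible API at the base URL shown. As a minimal illustration (not part of the app itself), a request against that endpoint could look like this; the prompt is a placeholder and the model must already be pulled:

```python
import json
import urllib.request

BASE_URL = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint

def build_payload(prompt: str, model: str = "llama3.2") -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt: str, model: str = "llama3.2") -> str:
    """Send one chat turn to the local Ollama server and return the reply."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Calling `chat("Hello")` requires the `ollama serve` process started earlier to still be running in its terminal window.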
This debate is now accessible to anyone with the link, including users who are not logged in.
Download the analysis report as a PDF document.
Use these to explore multi-perspective analysis and see how AI agents debate complex topics from different viewpoints.
Choose a color theme that suits your style. You can change this later in Settings.
Enter any topic or question in the text box to start a multi-agent debate. Try questions like:
Select between two analysis tiers based on your needs:
Intelligent, efficient analysis
Much faster and more intelligent than Standard
Fine-tune your debate with modes and perspective sets:
This model requires authorization to access.
Incorrect password
Polymarket is currently pricing this prediction at:
This question has been classified as a multi-outcome prediction. The following candidate options were discovered. You can edit, add, or remove options before starting the debate.