Technology

Leveraging AI to transform public engagement and support informed, inclusive democratic decisions.

Empowering Deliberative Democracy with AI

ORBIS harnesses the power of AI to support deliberative democracy, making public engagement more accessible, comprehensible, and impactful.

AI-enhanced Tools for Deliberation

ORBIS has developed and implemented a suite of AI-powered tools designed to improve and scale deliberative democracy processes. These tools aim to make deliberative processes more accessible, easier to follow, and of higher quality, broadening participation and deepening understanding of public opinion and policy debates.

Deliberative Platforms Powered by AI

The ORBIS toolkit is integrated into several project-specific deliberative platforms: BCause, PolisOrbis, and Democratic Reflection, each designed to meet distinct deliberative needs. BCause structures debates into supporting and opposing views, displays them through timelines and argument trees, and synthesizes discussions by highlighting relevant contributions. PolisOrbis, built on the open-source Pol.is framework, facilitates large-scale discussions, using AI to identify and display emerging patterns of consensus in real time.
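To make the idea of a structured debate concrete, the sketch below models a debate question with supporting and opposing contributions arranged as an argument tree. It is a minimal illustration only: the class names, stance labels, and rendering format are assumptions made for this example, not the BCause or ORBIS data model.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: node structure and stance labels are assumptions,
# not the actual BCause data model.
@dataclass
class ArgumentNode:
    author: str
    text: str
    stance: str                      # "support" or "oppose" relative to the parent
    replies: List["ArgumentNode"] = field(default_factory=list)

@dataclass
class Debate:
    question: str
    positions: List[ArgumentNode] = field(default_factory=list)

    def render(self) -> str:
        """Flatten the argument tree into an indented outline, one line per contribution."""
        lines = [self.question]

        def walk(node: ArgumentNode, depth: int) -> None:
            marker = "+" if node.stance == "support" else "-"
            lines.append("  " * depth + f"[{marker}] {node.author}: {node.text}")
            for reply in node.replies:
                walk(reply, depth + 1)

        for position in self.positions:
            walk(position, 1)
        return "\n".join(lines)

# Example usage with hypothetical contributions
debate = Debate("Should the city pedestrianise the old town centre?")
pro = ArgumentNode("Ana", "It would cut traffic noise and emissions.", "support")
pro.replies.append(ArgumentNode("Ben", "Deliveries to shops would become harder.", "oppose"))
debate.positions.append(pro)
print(debate.render())
```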

AI Components in the ORBIS Toolkit

Each tool in the ORBIS toolkit addresses a specific aspect of enhancing deliberation. Argument Mining (AM) automatically extracts and structures argumentative components from text, analyzing how arguments are constructed in order to map reasoning across discourse types such as debates, legal documents, and online discussions.
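The following sketch shows the basic idea behind this kind of analysis: labelling individual sentences as claims, premises, or non-argumentative text. It uses a generic zero-shot classifier from the Hugging Face transformers library as a stand-in; it is not the ORBIS AM component, and the label set is an assumption chosen for illustration.

```python
# Illustrative sketch only: a generic zero-shot classifier standing in for a
# dedicated argument-mining model.
from transformers import pipeline

classifier = pipeline("zero-shot-classification")  # downloads a default NLI model

LABELS = ["claim", "premise", "non-argumentative"]

def mine_arguments(sentences):
    """Assign an argumentative role (and confidence score) to each sentence."""
    results = []
    for sentence in sentences:
        scored = classifier(sentence, candidate_labels=LABELS)
        results.append((sentence, scored["labels"][0], scored["scores"][0]))
    return results

# Hypothetical contributions from an online discussion
comments = [
    "The city should expand the tram network.",
    "Ridership on the existing lines grew strongly last year.",
    "Thanks everyone for joining tonight's session.",
]
for text, role, score in mine_arguments(comments):
    print(f"{role:>18} ({score:.2f}): {text}")
```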

Building Insights through AI

The Feedback Aggregation (FA) component clusters and summarizes discussions, extracts key phrases, and enriches data with relevant features to deliver concise, meaningful insights. This component processes the extensive inputs from discussions to identify dominant themes, sentiment trends, and emerging consensus, helping organizers and participants focus on key issues and refine decision-making processes. The Explanation Generator (EG) produces human-readable explanations of the data processed by AI components, enhancing the understanding of both citizens and institutions about how deliberative conclusions are derived.
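The sketch below illustrates this pattern with off-the-shelf tools: contributions are clustered using TF-IDF features and k-means, the dominant terms of each cluster are extracted, and a short template-based explanation is produced for each theme. The function name, parameters, and explanation wording are illustrative assumptions; the actual FA and EG components may work quite differently.

```python
# Illustrative sketch only: clustering, key-term extraction, and templated
# explanations standing in for the FA and EG components.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
import numpy as np

def aggregate_feedback(comments, n_themes=2, top_terms=3):
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(comments)

    model = KMeans(n_clusters=n_themes, n_init=10, random_state=0)
    labels = model.fit_predict(matrix)

    terms = np.array(vectorizer.get_feature_names_out())
    explanations = []
    for theme in range(n_themes):
        members = [c for c, l in zip(comments, labels) if l == theme]
        # Rank terms by their weight in the cluster centroid.
        keywords = terms[np.argsort(model.cluster_centers_[theme])[::-1][:top_terms]]
        # Template-based, human-readable explanation of the derived theme.
        explanations.append(
            f"Theme {theme + 1} ({len(members)} contributions) centres on: {', '.join(keywords)}."
        )
    return explanations

# Hypothetical contributions from a consultation
comments = [
    "Bike lanes would make my commute safer.",
    "We need protected cycling infrastructure downtown.",
    "Bus frequency at night is far too low.",
    "Late-night bus service should be expanded.",
]
for line in aggregate_feedback(comments):
    print(line)
```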