Heinz Moser: Education and Media Pedagogy in Light of a Refeudalization of Capitalism
A refeudalization emerging from digital capitalism is currently being described as a new crisis of the public sphere. In the political arena, one could point to the example of US President Donald Trump and his relationship with the technology pioneers of Silicon Valley. On the theoretical side, the former Greek finance minister Yanis Varoufakis has sketched a new social order of technofeudalism. In my conference contribution I discuss the concept of a refeudalization of society and its relevance for education and media pedagogy.
**AI Summary**
The presentation critically evaluates current media education frameworks, highlighting their inadequacy in addressing emerging threats from tech oligarchs and authoritarian structures. Key points include:
1. **Challenges to Media Education**:
- The field is seen as overly vague, lacking clear outcomes or regulatory rigor.
- Critics argue that historical approaches fail to address the complex interplay of digital technologies, surveillance capitalism, and political power.
2. **Emerging Authoritarianism in Tech**:
- Analogies to the Trump administration highlight how tech companies (e.g., "Frisk Corp.") are reshaping governance as "technofeudalism," consolidating power via algorithms while suppressing democratic dialogue.
- Coinages of this kind capture the fusion of tech enthusiasm with plutocratic rule, undermining traditional freedoms.
3. **Global and Structural Critiques**:
- Tech platforms regulate users at no cost to themselves, eroding public discourse and fostering echo chambers.
- Calls for a public-service internet ("öffentlich-rechtliches Internet") to counterbalance private monopolies, emphasizing transparency and equitable access.
4. **Historical and Political Contexts**:
- Trump's "erratic" leadership is framed as a precursor to broader illiberalism, with tech companies acting as tools for censorship or propaganda.
- Critiques extend to EU regulation (e.g., of AI) and the dominance of high-tech oligarchs shaping global narratives through algorithms.
5. **Reimagining Media Pedagogy**:
- Advocacy for independent public-sector platforms to mediate between private tech and state power, prioritizing open-source models.
- Emphasis on balancing innovation with democratic oversight, fostering a public-service internet as a bulwark against surveillance capitalism.
6. **Outlook**:
- While challenges persist, the speaker remains committed to revitalizing media pedagogy through education, policy, and inclusive digital design. The goal is to create resilient systems that counter authoritarian tech narratives and protect democratic values.
The talk underscores the need for a robust, reimagined media education focused on countering tech-driven authoritarianism. It calls for transparent regulations, independent public platforms, and equitable digital policies to safeguard democracy against surveillance capitalism and oligarchic control. Key solutions include fostering critical tech literacy, supporting decentralized digital governance, and integrating ethical design principles into technology development.
Oliver Leistert: AI as a Precarization Machine
(AI summaries below were generated after ~5, ~10, ~20, and ~30 minutes of the talk, plus one based on an older, broken SRT transcript.)
The penetration of every conceivable pore of society by AI (LLMs) has the effect of precarizing knowledge, labor, and much else. The talk asks which kinds of precarization (e.g., epistemic precarization, ecological precarization) arise, or join and transform existing ones, and what picture emerges when we consider them in their interconnection. The thesis is that the concept of precarization is well suited to capturing the AI industry's power grab.
**AI Summary (after ~5min)**
The speech centers on enhancing media activism by integrating strategic resource management and critical frameworks. Key elements include:
Infrastructure & Tools: "PRIKA" and cross-cutting technology are highlighted for systematic planning, with practical roles like logistics support (e.g., Oliver's involvement); stickers symbolize tangible resources.
Conceptual Frameworks:
Precarity Analysis: Reframes the "precariat" in terms of governmental precarization, drawing from Bourdieu, Butler, and Lorey to emphasize systemic vulnerabilities in labor, care, and security.
Lorey’s Governmentality: Critiques neoliberal precarization as a tool for governance, highlighting how insecurity is both repressive and productive.
Critical Insights:
Digital technologies are viewed as surveillance mechanisms altering social dynamics (e.g., social media's solitary consumption).
Precariousness is contextualized globally, critiquing overemphasis on privileged models of security.
Education and critical theory are proposed to address complex power structures in media contexts.
The summary bridges academic theories with practical activism, advocating for a nuanced approach to systemic challenges.
Key Takeaways:
Infrastructure tools like PRIKA align strategic planning with tech integration.
The "provenimental" framework reinterprets precarity through Bourdieu, Butler, and Lorey.
Digital surveillance is analyzed as part of broader governance mechanisms.
Holistic education is emphasized to challenge neoliberal power dynamics.
This synthesis aims to equip activists with both tactical resources and critical analysis for effective engagement.
**AI Summary (after ~10min)**
The speech synthesizes critical theories and media activism to address systemic precarity, structured as follows:
Precarity Redefined:
Combines Bourdieu on precarity, Butler's feminist critique of precariousness, and Lorey's governmental precarization to highlight labor/care/security vulnerabilities under neoliberalism.
Technology & Surveillance:
AI and digital tools (e.g., chatbots) are depicted as surveillance mechanisms that individualize blame, fragmenting collective resistance. Exploitative tech narratives for capital gain are critiqued.
Critical Analysis of Power Structures:
Lorey’s concept of ambivalent precarity explains how insecurity both perpetuates and resists power. Care work and security are integrated into analyses beyond economic exploitation.
Strategic Tools for Activism (PRIKA):
PRIKA and cross-section technology offer a systemic approach to resource management, linking tactical activism with strategic planning.
Educational Role:
Emphasizes holistic education to challenge neoliberal dynamics and cultivate critical awareness.
Key Takeaways:
Employ PRIKA for strategic organizing; avoid celebratory tech narratives.
Use the governmental-precarization framework to analyze precarity holistically.
Critique AI as a governance tool and address ethical risks.
Prioritize education to dismantle systemic inequalities.
The speech bridges theory (Bourdieu, Butler, Lorey) with practice (PRIKA), advocating for intersectional, critical activism against neoliberal precarity.
**AI Summary (after ~20min)**
The speech explores the multifaceted challenges of Large Language Models (LLMs) in artificial intelligence (AI), emphasizing technical, ecological, ethical, and educational concerns. Key points include:
Technical Challenges:
Compute Intensity: LLMs demand massive computational resources, raising environmental and infrastructure burdens (a back-of-the-envelope estimate follows this list).
Transparency Issues: Outputs are "black-boxed," limiting user understanding of training data influence and decision-making.
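To give a sense of scale for these resource demands, here is a minimal back-of-the-envelope sketch (my illustration, not a figure from the talk). It uses the common ≈6 · N · D FLOPs rule of thumb for transformer training; the model size, token count, GPU throughput, power draw, and grid carbon intensity are all assumed values.

```python
# Back-of-the-envelope estimate of LLM training compute and energy.
# Every number below is an assumed, illustrative value, not a figure from the talk.

params = 70e9                  # assumed model size: 70B parameters
tokens = 1.4e12                # assumed training tokens (~20 per parameter)
flops = 6 * params * tokens    # rule of thumb: ~6 FLOPs per parameter per token

gpu_flops = 300e12             # assumed sustained throughput per GPU (FLOP/s)
gpu_power_kw = 0.7             # assumed draw per GPU incl. overhead (kW)
grid_kg_co2_per_kwh = 0.4      # assumed grid carbon intensity

gpu_hours = flops / gpu_flops / 3600
energy_kwh = gpu_hours * gpu_power_kw
co2_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

print(f"training compute: {flops:.2e} FLOPs")
print(f"GPU-hours: {gpu_hours:.2e}")
print(f"energy: {energy_kwh:.2e} kWh (~{co2_tonnes:.0f} t CO2)")
```

With these assumed values the estimate lands in the hundreds of tonnes of CO2 for a single training run, which illustrates why the ecological critique recurs throughout the talk.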
Ethical Concerns:
Reliability & Bias: AI-generated content may lack accuracy, perpetuating biases due to opaque datasets and flawed accountability mechanisms.
Censorship Risks: Content filters could be misused for ideological control or propaganda under the guise of safety.
Educational Impact:
Homework Cheating: Overuse of LLMs like GPT-4 undermines critical thinking by producing contextually irrelevant, unverified outputs.
Ecological and Political Issues:
The environmental impact of LLM training is highlighted, along with concerns about government surveillance via AI crawlers.
Proposed Solutions:
Policy & Regulation: Enforce data transparency laws, ethical standards, and unbiased datasets; mandate bias tracking and accountability metrics (a minimal example metric follows this list).
Corporate Action: Incentivize ethical data use through taxes and fund interdisciplinary AI research.
Education Reform: Integrate AI literacy focusing on critical evaluation and ethics training.
Public Engagement: Promote citizen science for bias correction and media campaigns against unchecked filters.
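As one concrete reading of "bias tracking and accountability metrics," the sketch below computes a demographic parity gap over model decisions. It is an illustration under assumptions, not a method from the talk; the records and group names are invented.

```python
# Minimal "bias tracking" sketch: demographic parity gap over model decisions.
# Records and group names are invented example data, not from the talk.
from collections import defaultdict

records = [                     # (group, model_decision); True = positive outcome
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in records:
    totals[group] += 1
    positives[group] += decision        # True counts as 1

rates = {g: positives[g] / totals[g] for g in totals}
parity_gap = max(rates.values()) - min(rates.values())

print("positive-outcome rates:", rates)
print(f"demographic parity gap: {parity_gap:.2f}")   # one auditable accountability metric
```

A gap near 0 means groups receive positive outcomes at similar rates; a large gap is the kind of signal a mandated accountability metric could surface.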
Timeline:
Short-term: Draft transparency mandates and pilot AI education programs.
Medium-term: Offer tax incentives for ethical practices; fund research.
Long-term: Develop global ethics frameworks with stakeholder collaboration.
By addressing these challenges through collaborative efforts, stakeholders aim to harness AI’s potential equitably and sustainably while mitigating risks. Modalities for human-AI interaction are also explored, stressing hybrid approaches combining structured programming with intuitive prompting.
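A minimal sketch of what "combining structured programming with intuitive prompting" could look like, assuming a generic LLM endpoint: a free-form prompt whose output is parsed and validated programmatically. `call_llm` is a hypothetical stub, not a real API.

```python
# Hybrid pattern: an intuitive free-form prompt, wrapped in structured validation.
# `call_llm` is a hypothetical stub standing in for any real LLM API.
import json

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a model endpoint here.
    return '{"summary": "LLMs raise transparency and energy concerns.", "confidence": 0.7}'

def summarize(text: str) -> dict:
    prompt = (
        'Summarize the following text as JSON with keys '
        f'"summary" (string) and "confidence" (0 to 1):\n{text}'
    )
    raw = call_llm(prompt)
    data = json.loads(raw)                # structured side: parse, do not trust free text
    conf = data.get("confidence", -1)
    if not 0.0 <= conf <= 1.0:
        raise ValueError("model returned an out-of-range confidence")
    return data

print(summarize("LLMs demand massive resources and are hard to audit."))
```

The design point is that the prompt stays intuitive while the surrounding program enforces a verifiable contract, rather than trusting the model's prose.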
**AI Summary (after ~30min)**
The speech critically examines the multifaceted challenges of Artificial Intelligence (AI), particularly Large Language Models (LLMs), across technical, ethical, ecological, and socioeconomic dimensions. Key insights and proposed solutions are structured as follows:
Ethical & Ecological Concerns:
Transparency: AI systems operate as "black boxes," limiting accountability for biased outputs.
Ecological Impact: High energy consumption during setup/training contributes to environmental strain.
Academic & Misuse Issues:
Homework Cheating: Tools like GPT-4 undermine critical thinking and academic integrity.
Content Filters: Risk ideological control while failing to address systemic biases in training data.
Global Labor Dynamics:
Colonial Power: Data labeling tasks outsourced to the Global South perpetuate inequities.
Automation Displacement: Roles in education, finance, and media sectors face precarity due to AI automation.
Decision-Making Shifts:
Accountability Gaps: AI handles critical decisions (e.g., military targeting), shifting responsibility away from humans.
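One common countermeasure to such accountability gaps is a human-in-the-loop gate that keeps high-stakes decisions attributable to a named person. The sketch below is my illustration under assumed thresholds, not a design from the talk.

```python
# Human-in-the-loop gate: high-stakes AI recommendations need a named human approver.
# The threshold and record format are illustrative assumptions, not from the talk.
from dataclasses import dataclass
from typing import Optional

RISK_THRESHOLD = 0.5            # assumed cutoff above which a human must decide

@dataclass
class Recommendation:
    action: str
    risk_score: float           # 0 = harmless, 1 = critical

def decide(rec: Recommendation, human_approver: Optional[str] = None) -> dict:
    if rec.risk_score >= RISK_THRESHOLD:
        if human_approver is None:
            raise PermissionError("high-risk action requires a named human approver")
        responsible = human_approver      # accountability stays with a person
    else:
        responsible = "automated-policy"  # low-stakes path, still attributable
    return {"action": rec.action, "responsible": responsible}

print(decide(Recommendation("flag account", 0.2)))
print(decide(Recommendation("deny benefits", 0.9), human_approver="case_officer_17"))
```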
Economic & Structural Changes:
Neoliberalism: Prioritizes individual self-reliance over systemic reform, masking labor and resource exploitation.
Proposed Solutions:
Policy/Regulation: Enforce transparency laws with tax incentives for bias tracking.
Corporate Action: Fund interdisciplinary research and ethical AI use.
Education Reform: Integrate AI literacy with ethics training.
Public Engagement: Promote transparent media campaigns and citizen science for bias correction.
Timeline:
Short-term: Draft transparency mandates and educational pilots.
Medium-term: Implement tax incentives; support research.
Long-term: Develop global ethics frameworks through stakeholder collaboration.
Economic Impact:
AI reshapes jobs in sectors like education, ICT, finance, and journalism, particularly affecting middle management and white-collar roles. Because office work often deviates from formal process guidelines, accountability for errors becomes opaque.
Climate & Colonialism:
Large-scale AI infrastructure's carbon footprint is significant but underreported. Data labeling tasks perpetuate the exploitation of the Global South. EU and Chinese initiatives address these issues only vaguely.
Conclusion:
Advocates for hybrid human-AI approaches, emphasizing equitable adoption while addressing socioeconomic disparities. Encourages active public participation in shaping AI outcomes to mitigate harm and promote equity.
**AI Summary (using old srt)**
1. **AI Labor Market Implications:**
- **Automation Bias:** AI focuses on cognitive tasks, potentially displacing white-collar workers in roles like data analysis or process oversight.
- **Global South Disparity:** Low-skilled "gig jobs" dominate AI infrastructure development (e.g., data annotation), perpetuating colonial power dynamics through unacknowledged labor contributions.
2. **Ethical and Accountability Challenges:**
- **Decision-Making Responsibility:** AI-driven decisions (e.g., military targeting) delay accountability, raising ethical concerns about oversight and justice.
- **Bias in Training Data:** Overlooked biases in datasets can corrupt outcomes if unchecked, necessitating rigorous vetting processes.
3. **Environmental Concerns:**
- **Energy Consumption:** High reliance on coal-powered data centers (e.g., Apple’s facilities) contributes to carbon footprints and resource depletion.
- **Regulatory Gaps:** EU regulations fail to address AI's ecological impact comprehensively, risking long-term sustainability issues.
4. **Colonial Dynamics in Technology:**
- **Cognitive Task Displacement:** AI threatens to centralize knowledge work (e.g., training models) while devaluing human analysis.
- **Decentralized Labor Exploitation:** Global South populations bear the brunt of AI infrastructure work without adequate recognition or compensation.
5. **Critical Reflections:**
- **Mixed Perspectives:** While acknowledging AI’s transformative potential, participants critique its socioeconomic and environmental risks.
- **Call for Equity:** Stress on transparent policies, equitable resource distribution, and inclusive design to mitigate negative impacts.
**Conclusion:**
The dialogue underscores the need for a balanced approach to AI development—emphasizing ethical governance, transparency, and equity. Addressing labor displacement, decarbonization efforts, and systemic biases is critical to harnessing AI’s benefits while avoiding societal harm. Policymakers must prioritize inclusive frameworks that integrate technological progress with sustainable practices and ethical responsibilities.