Daily Archives: May 2, 2023

17. Carthage – Empire of the Phoenicians


Fall of Civilizations – Apr 11, 2022

[AUDIO ONLY]

Buried beneath the city streets of the Tunisian capital of Tunis, an ancient city lies forgotten…

In this episode, we look at one of the most dramatic stories to come down to us from the ancient world: the rise and fall of the empire of Carthage. Find out how this city rose out of the Phoenician states of the Eastern Mediterranean, and set out on voyages of discovery and settlement that put it at the centre of the ancient world. And hear how the city of Carthage was destroyed, and its memory nearly wiped from the earth.

SOURCES: https://www.patreon.com/posts/sources…

Credits:

Written and produced by Paul Cooper

Sound engineering by Alexey Sibikin

Original music by Pavlos Kapralos: / @pavloskapralos3969

Sass Hoory: percussion
Lelu Blesa: vocals
Anastasia Papadopoulou: vocals
June Filetti: oboe
Pavlos Kapralos: oud, vocals, flutes, instrument sampling and editing

Voice actors:

Michael Hajiantonis, Lachlan Lucas, Alexandra Boulton, Simon Jackson, Tom Marshall-Lee, Chris Harvey, Nick Denton, Paul Casselle


Full interview: “Godfather of artificial intelligence” talks impact and potential of AI

CBS Mornings Mar 25, 2023 #artificialintelligence #tech
Geoffrey Hinton is considered a godfather of artificial intelligence, having championed machine learning decades before it became mainstream. As chatbots like ChatGPT bring his work to widespread attention, we spoke to Hinton about the past, present and future of AI. CBS Saturday Morning’s Brook Silva-Braga interviewed him at the Vector Institute in Toronto on March 1, 2023.

See related:

as well as:

Guardian-AI


The Future of Migration


World Economic Forum – May 2, 2023

Levels of migration are being driven higher by changes in patterns of global economic activity and inequality.

What policies and strategies will lead to the best outcomes for prospective migrants, as well as for their origin, transit and destination countries?

Beauty or The Beast: The True Cost of ChatGPT

GBH Forum Network May 1, 2023 CAMBRIDGE

Recent concerns about the long-term implications of artificial intelligence apps like ChatGPT have prompted journalists, academics and entrepreneurs to call for a temporary halt to the training of AIs, warning that “AI systems with human-competitive intelligence can pose profound risks to society and humanity.” In this Forum, we consider the direct and also unseen impacts of using a tool that has yet to be regulated or even fully understood.

Gary Marcus, scientist, entrepreneur, and author of “Rebooting AI: Building Artificial Intelligence We Can Trust,” is Professor Emeritus of Psychology and Neural Science at NYU and host of the podcast Humans versus Machines. Jane Rosenzweig is Director of the Harvard Writing Center, a freelance writer, and author of the Writing Hacks newsletter. Wesley Wildman is a Professor of Philosophy, Theology, and Ethics + Computing & Data Sciences at Boston University. Andrew Kimble, Director of Online Lifelong Learning at BU School of Theology, will act as moderator.

Contents of the video – 0:00:00

Introduction – 0:00:32

Program start – 0:02:19

Discussion start – 0:39:21

Q&A start – 0:53:48

See related:

as well as:

Guardian-AI


Letter signed by Elon Musk demanding AI research pause sparks controversy | Artificial intelligence (AI) | The Guardian

A letter co-signed by Elon Musk and thousands of others demanding a pause in artificial intelligence research has created a firestorm, after the researchers cited in the letter condemned its use of their work, some signatories were revealed to be fake, and others backed out of their support.

On 22 March more than 1,800 signatories – including Musk, the cognitive scientist Gary Marcus and Apple co-founder Steve Wozniak – called for a six-month pause on the development of systems “more powerful” than GPT-4. Engineers from Amazon, DeepMind, Google, Meta and Microsoft also lent their support.

Developed by OpenAI, a company co-founded by Musk and now backed by Microsoft, GPT-4 has developed the ability to hold human-like conversation, compose songs and summarise lengthy documents. Such AI systems with “human-competitive intelligence” pose profound risks to humanity, the letter claimed.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter said.

The Future of Life Institute, the thinktank that coordinated the effort, cited 12 pieces of research from experts including university academics as well as current and former employees of OpenAI, Google and its subsidiary DeepMind. But four experts cited in the letter have expressed concern that their research was used to make such claims.

When initially launched, the letter lacked verification protocols for signing and racked up signatures from people who did not actually sign it, including Xi Jinping and Meta’s chief AI scientist Yann LeCun, who clarified on Twitter he did not support it.

Critics have accused the Future of Life Institute (FLI), which has received funding from the Musk Foundation, of prioritising imagined apocalyptic scenarios over more immediate concerns about AI – such as racist or sexist biases being programmed into the machines.

Among the research cited was “On the Dangers of Stochastic Parrots”, a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google. Mitchell, now chief ethical scientist at AI firm Hugging Face, criticised the letter, telling Reuters it was unclear what counted as “more powerful than GPT4”.

“By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI,” she said. “Ignoring active harms right now is a privilege that some of us don’t have.”

Her co-authors Timnit Gebru and Emily M Bender criticised the letter on Twitter, with the latter branding some of its claims as “unhinged”. Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, also took issue with her work being mentioned in the letter. She last year co-authored a research paper arguing the widespread use of AI already posed serious risks.

Her research argued the present-day use of AI systems could influence decision-making in relation to climate change, nuclear war, and other existential threats.

She told Reuters: “AI does not need to reach human-level intelligence to exacerbate those risks.”

“There are non-existential risks that are really, really important, but don’t receive the same kind of Hollywood-level attention.”

Asked to comment on the criticism, FLI’s president, Max Tegmark, said both short-term and long-term risks of AI should be taken seriously. “If we cite someone, it just means we claim they’re endorsing that sentence. It doesn’t mean they’re endorsing the letter, or we endorse everything they think,” he told Reuters.

  • Reuters contributed to this report

The original version of this story stated that the Future of Life Institute (FLI) was primarily funded by Elon Musk. It has been updated to reflect that while the group has received funds from Musk, he is not its largest donor.

See related:

as well as:


BBC World Service – Newshour, 02/05/2023 13:06 GMT

Newshour-AI

Click here to listen

A man widely regarded as the godfather of artificial intelligence (AI) has quit his job at Google, warning about the risks posed by the technology he helped to develop. Dr Geoffrey Hinton joins a growing number of experts sharing their concerns about the speed at which AI is developing.

See related:

as well as:


LIVE: Noam Chomsky and Amy Goodman Discuss Inequality

GBH Forum Network Streamed live on Apr 24, 2017

Join us LIVE FROM CAMBRIDGE for a discussion between Noam Chomsky, MIT’s Professor of Linguistics and Philosophy, and Amy Goodman, host of the award-winning independent news program Democracy Now! The two will discuss the unprecedented rate of inequality examined in Chomsky’s newest book (and documentary film), Requiem for the American Dream: The 10 Principles of Concentration of Wealth & Power. Presented in partnership with Harvard Book Store.

WGBH Forum Network ~ free online lectures: explore a world of ideas. See the complete archive here: http://forum-network.org

Special Interview: Noam Chomsky

TeleSUR English Feb 16, 2018

Noam Chomsky on humanitarian intervention in an interview with teleSUR

The Empire Files: Noam Chomsky on Electing The President of an Empire

TeleSUR English Oct 24, 2015

At the Massachusetts Institute of Technology in Cambridge, Mass., Abby Martin interviews world-renowned philosopher and linguist Professor Noam Chomsky. Prof. Chomsky comments on the presidential primary “extravaganza,” the movement for Bernie Sanders, the U.S.-Iran nuclear deal, the bombing of the Doctors Without Borders hospital in Kunduz, Afghanistan, modern-day libertarianism and the reality of “democracy” under capitalism.

Economic Update: Noam Chomsky on Fragile US Empire

Democracy At Work Aug 8, 2022 Economic Update with Richard D. Wolff [Full Episodes]

[S12 E30] Noam Chomsky on Fragile US Empire. In this week’s show, Prof. Wolff gives updates on US freight workers’ strike preparations; progressives and labor targeting municipal government; a Chipotle store closing to stop unionizing; and Occupy Wall Street’s “Debt Collective” $5.8 billion student loan forgiveness win. In the second half of the show, Prof. Wolff interviews Noam Chomsky on the decline and fragility of the US empire, the role of the US military, and the rise of fascism as a coping mechanism.