Weekly AI: Discoveries, Innovations and Warnings

This week has been eventful in the world of AI, with numerous exciting developments making headlines. Meta introduced new AI models, an AI priest bot emerged, scientists found a better way to detect AI hallucinations, and UNESCO warned about AI distorting historical facts.

Dive into the details below!

AI and Human Skills
  • At the 1839 Awards, photographer Miles Astray entered a real photograph of a flamingo in the AI-generated image category. Remarkably, the judges awarded it third place, believing it to be AI-generated. Astray then revealed the true story behind the photo, arguing that human skill can still outdo machines.


AI in Religion
  • Catholic Answers created an AI priest bot, “Father Justin,” to answer questions from the faithful. The bot quickly stirred up controversy by offering to administer sacraments and preaching contentious theories, leading to its shutdown. After modifications, it returned as “Justin,” a lay theologian intended to educate and provide reliable answers to questions about the Catholic faith.

Meta’s New AI Models
  • Meta has just released five new AI models spanning music, text, image, and audio generation:
    1. JASCO: Offers more control over music generation. The input can be chords or rhythm.
    2. Chameleon: Generates combinations of text and images, processing words and images simultaneously. Available under a research license.
    3. Multi-Token Prediction: Enables faster training of AI models to predict words, improving the efficiency and effectiveness of text generation. Available under a research license.
    4. AudioSeal: Detects AI-generated speech in longer audio passages, available under a commercial license to prevent misuse of generative AI tools.
    5. Automatic Indicators: Ensures text-to-image models reflect geographic and cultural diversity by assessing potential geographic disparities, enabling better representation in generated content.

Addressing AI Hallucinations
  • Scientists have found a way to address AI hallucinations, which hinder effective AI use by generating and confidently asserting false information. A new method, published in the scientific journal Nature, distinguishes correct from incorrect AI responses 79% of the time, about 10 percentage points better than previous methods. The research focused on “confabulations”: inconsistent false answers to fact-based questions. The method is simple: the chatbot generates several responses to the same question, and a second language model groups them by meaning. Researchers then calculate “semantic entropy,” a measure of the diversity of meanings across the responses. High semantic entropy indicates a likely confabulation, while low semantic entropy suggests the model is consistent and less likely to be confabulating.
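The entropy calculation described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors’ implementation: it assumes the sampled answers have already been grouped into meaning clusters (a step the paper delegates to a second language model) and simply computes Shannon entropy over the cluster frequencies.

```python
import math
from collections import Counter

def semantic_entropy(cluster_labels):
    """Shannon entropy over meaning clusters of sampled answers.

    cluster_labels: one label per sampled answer; answers judged
    semantically equivalent (e.g. by a second language model)
    share the same label.
    """
    counts = Counter(cluster_labels)
    total = len(cluster_labels)
    # Sum -p * log(p) over the empirical cluster distribution.
    return -sum((n / total) * math.log(n / total) for n in counts.values())

# Consistent answers -> low entropy (unlikely to be a confabulation)
low = semantic_entropy(["paris", "paris", "paris", "paris"])

# Scattered, mutually inconsistent answers -> high entropy (likely confabulation)
high = semantic_entropy(["1912", "1915", "1908", "1912"])
```

Flagging a response as a confabulation then reduces to comparing this entropy against a threshold tuned on held-out questions.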

UNESCO Warns: A New Threat from AI
  • Artificial intelligence’s distortion of historical truth poses a serious threat, according to UNESCO. AI systems tend to invent false historical events and can be influenced by inaccurate data. UNESCO, together with the World Jewish Congress, warns that without global ethical guidelines, AI could distort the memory of the Holocaust and fuel anti-Semitism. These systems often pull data from the Internet, which may contain errors and biases, leading to misrepresentation and discrimination. UNESCO argues that immediate implementation of its AI ethics recommendations is essential to ensure future generations grow up with the truth. Read the full document here.

Stay tuned as we continue to monitor these groundbreaking advancements and their implications. The world of AI is rapidly evolving, and staying informed is key.

Join me next week for more updates and insights into the fascinating realm of artificial intelligence.

Thank you for reading!

Author: Agata Konieczna, PhD
