I'll keep this short because I really don't know what to think about it yet, other than that I'm a little frightened. Google's Gemini AI can now generate a rather convincing-sounding podcast excerpt from websites, uploaded documents, and so on. It's a newly added feature of their NotebookLM note-taking application. Simply sign in, create a new notebook, add website(s), text documents, or pasted text, and it will generate things like a written FAQ, a study guide, or a briefing document. Recently added is the "Audio Overview" feature, which (from the helpful pop-up on the site):
Audio Overviews are "deep dive" discussions that summarize the key topics of your sources. This is an experimental feature, and here are a few notes to help you get started:
- Audio Overviews (including the voices) are AI-generated, so there might be inaccuracies and audio glitches.
- Audio Overviews are not a comprehensive or objective view of a topic, but simply a reflection of your sources.
In short, it creates an audio file that sounds like a podcast excerpt, with two "people" discussing the topics in your sources. The generated conversation uses rather realistic voices and, to my untrained ears, is quite difficult to distinguish from a conversation between real people … at least when listening casually.
For example, I took today's Abbreviated Pundit Roundup, ran the Audio Overview on it, and uploaded the audio to YouTube. You can listen to it here:
Honestly, I don't know what to think about it yet, but with the rise of deepfakes and the idea that we can no longer automatically trust photographs, I can't help but think about how this technology can and will be used for ill instead of good.
I just wanted to flag this for the KOS community, and I'm interested in your thoughts.