AI-powered meeting assistants such as Otter.ai, Zoom AI Companion, and Microsoft 365 Copilot promise greater employee productivity and a reliable record of discussions, attending online meetings alongside or instead of participants. These assistants can record video and transcribe audio, summarize notes and action items, provide analytics, and even coach speakers on more effective communication. But do the benefits outweigh the associated security and privacy risks?
Consider this: if a stranger walked into a meeting room intending to record the conversation and use that information for unknown purposes, would they be allowed to proceed unquestioned? Would the same conversation unfold with the same level of candor? The answer, of course, is no. So why do companies allow AI meeting assistants to listen in on conversations and collect potentially sensitive data?
Content privacy
These applications pose a significant privacy and security risk to business information and to the people being recorded. The potential for misuse is a pressing concern that many organizations have yet to work out how best to manage. The technology is spreading faster than awareness of its risks, underscoring the need for immediate action.
The first casualty of AI eavesdropping could be the quality of the conversation itself. Employees who speak candidly about colleagues, managers, the business, customers, or investors could find themselves disciplined on the basis of the assistant's transcript, which could easily be taken out of context. In turn, fear of how recordings might be used could chill innovation and transparency.
Other risks include employees feeling pressured to consent against their will because a more senior colleague wants to use an assistant, and over-reliance on the accuracy of transcripts, which can contain errors that, left unchecked, become the de facto record.
Online meetings often involve discussion of personal data, intellectual property, commercial strategy, unpublished information about a public company, or details of security vulnerabilities, any of which could cause legal, financial, and reputational headaches if disclosed. Existing tools for stopping leaks, such as data loss prevention systems, would not keep this data from leaving the organization's control.
There is considerable potential for unauthorized access to, or misuse of, recorded conversations. Although enterprise solutions may offer some control through administrative safeguards, third-party applications often have fewer protections, and it may not always be clear how or where a vendor will store data, for how long, who will have access to it, or how the service provider might use it.
Privacy and security are often an afterthought
Some transcription tools may permit the vendor to ingest and use data for other purposes, such as algorithm training. Users of the virtual meeting provider Zoom complained last year after an update to Zoom's terms of service raised concerns that customer data would be used to train the company's AI algorithms. Zoom was forced to update its terms and clarify how and when customer data would be used for product improvement purposes.
Zoom's previous data privacy problems serve as a stark reminder of the potential consequences. An investigation by the Federal Trade Commission and a subsequent $86 million court settlement over its privacy practices showed that fast-growing startups can neglect privacy and data security.
Companies in this space may also inadvertently become targets for hackers determined to access thousands of hours of corporate meetings. Any leak, whatever its content, would damage the reputation of both the vendor and the customer.
The AI revolution does not stop at online meetings. Gadgets such as the Humane AI Pin wearable take the assistant concept further, recording interactions throughout the day and processing the content. In such cases, it seems even less likely that Pin users will continually ask other parties for consent each time, readily exposing sensitive conversations.
Legal considerations
A key legal consideration with AI assistants is consent. Most AI assistants include a clear and visible recording-consent mechanism to comply with laws such as the California Invasion of Privacy Act, which makes it a crime to record a person's voice without their knowledge or consent. However, legal requirements vary: 11 US states, including California, have "all-party" consent laws requiring every participant to agree to being recorded, while others have "one-party" consent laws, under which only one participant, typically the one making the recording, must consent.
Map of all-party and one-party consent states
Participants in online work meetings may assume they have privacy, but that often depends on company policies and jurisdiction. In the United States, workplace privacy is generally limited by company policies. By contrast, the European Union and its member states, notably Germany and France, offer greater workplace privacy protections.
Failure to comply with recording laws can lead to criminal liability, which is rarely enforced, as well as civil damages and penalties, which are frequently litigated. More than 400 cases related to illegal recordings have been filed in California this year alone, with thousands more in arbitration, though none so far involve AI assistants.
Managing the record
As AI assistants become increasingly integrated into professional and personal spheres, managers cannot overstate the urgency of addressing privacy and security concerns. To manage the risks, companies must quickly assemble dedicated teams to assess emerging technologies, document policies, and socialize those policies across the organization.
A comprehensive policy should set out the permitted uses of AI assistants, consent requirements, data handling and data protection protocols, and clear consequences for violations. Ongoing updates to these policies are essential as the technology evolves, and in parallel there is a critical need to educate employees about the potential risks and to foster a culture of vigilance.

By taking these proactive measures, companies can reap the benefits of AI assistants while protecting their sensitive information and maintaining the trust of employees and customers. By preventing incidents before they occur and ensuring that integrating AI into meetings improves productivity without compromising privacy and security, organizations can enhance, and even revolutionize, team collaboration.