GDPR and Voice Dictation: What Irish Lawyers Need to Get Right
When a solicitor dictates an attendance note about a family law consultation, the audio recording contains personal data. It may contain the client’s name, their spouse’s name, details of their children, financial information, and allegations of behaviour. When that audio is sent to a server for transcription, the personal data goes with it.
This is obvious when stated plainly, but it’s remarkable how many legal practitioners use dictation tools without considering where the audio data is actually processed. The answer, for most popular tools, is the United States.
Voice recordings are personal data
Under GDPR, personal data is “any information relating to an identified or identifiable natural person.” A voice recording of a solicitor discussing a client matter is unambiguously personal data — it identifies the speaker and typically identifies the client and other parties by name.
Voice recordings can also engage the GDPR rules on biometric data. Article 9 treats biometric data as a special category requiring additional protections where it is processed for the purpose of uniquely identifying a natural person. Dictation tools typically do not use voice data for identification, so this special-category regime will usually not apply, but the classification of voice recordings under GDPR remains an area of ongoing regulatory attention.
The key point for practitioners is simple: when you dictate into any tool, the audio and the resulting transcript are personal data subject to GDPR. Where that data is processed and stored matters.
Where popular tools process your data
Here’s where the audio goes when you dictate into the most commonly used tools:
Apple Dictation (Siri). Apple processes dictation requests on-device for short dictations on newer iPhones, but longer dictations and certain features still route through Apple’s servers. Apple’s data processing infrastructure is global, including US facilities. Apple’s privacy practices are relatively strong, but the data isn’t guaranteed to stay in the EU.
Google Voice Typing. Google processes all voice-to-text through Google Cloud infrastructure. Google Cloud has EU regions, but Google’s terms of service for consumer products don’t guarantee EU-only processing. For a standard Android phone dictation, the audio data may be processed in the US.
Dragon NaturallySpeaking / Dragon Legal. Dragon processes voice data through Nuance infrastructure, now owned by Microsoft. Microsoft’s cloud infrastructure is global. Dragon’s enterprise products offer some data residency options, but the standard Dragon Legal product for individual users does not provide an EU-only processing guarantee.
Otter.ai. Otter processes data in the United States. Its privacy policy explicitly states that data is stored and processed in the US. Otter has no EU data residency option for individual users.
OpenAI Whisper (via API). OpenAI’s API processes data through infrastructure that includes US facilities. While OpenAI offers a data processing agreement, the processing isn’t EU-only by default.
Microsoft 365 Dictate. Microsoft 365 dictation uses Azure Cognitive Services. Microsoft’s enterprise agreements can specify EU data residency (the “EU Data Boundary” commitment), but this applies to enterprise licences with specific configurations — not to standard Microsoft 365 subscriptions.
Why this matters for lawyers specifically
Every profession handles personal data, but lawyers occupy a particular position. Legal professional privilege — solicitor-client privilege in the Irish context — imposes obligations that go beyond standard GDPR compliance.
A solicitor who dictates a note about a client’s criminal defence, a family law dispute, or a commercial transaction is handling information that is:
- Personal data under GDPR (relating to identifiable individuals)
- Privileged material under the law of legal professional privilege
- Confidential under the solicitor’s professional duty of confidence
When this audio is processed through US infrastructure, it passes through a jurisdiction where:
- US law enforcement may be able to access it under the CLOUD Act, potentially without the data subject’s knowledge
- The protections of GDPR do not apply directly
- Previous EU-US adequacy frameworks (Safe Harbour and Privacy Shield) were struck down by the Court of Justice of the EU, and the current EU-US Data Privacy Framework may face similar challenges
For barristers, the position is similar. Instructions from solicitors, consultations with clients, and draft opinions all contain privileged and confidential information. Routing this through non-EU infrastructure creates a data protection exposure that is difficult to justify when EU-only alternatives exist.
What the DPC expects
The Data Protection Commission (DPC), Ireland’s supervisory authority under GDPR, has been increasingly active in scrutinising data transfers to the US. While the DPC has not issued specific guidance on voice dictation tools, its enforcement actions and guidance documents make several principles clear:
Data minimisation. Article 5(1)(c) requires that personal data be “adequate, relevant and limited to what is necessary.” If you can achieve the same result (transcription of a dictation) with EU-only processing, using a tool that routes data to the US may be difficult to justify under data minimisation principles.
Transfer impact assessments. Following the Schrems II decision, organisations transferring personal data to the US under standard contractual clauses must conduct transfer impact assessments, and even transfers relying on the EU-US Data Privacy Framework warrant documented consideration of the risks. For routine use of dictation tools containing client data, most practices haven't done this, but the obligation exists.
Accountability. Article 5(2) requires the data controller (the solicitor or their firm) to be able to demonstrate compliance. Using a dictation tool without understanding where the data goes or what protections apply is difficult to reconcile with the accountability principle.
What the Law Society and Bar Council say
The Law Society of Ireland’s guidance on data protection emphasises that solicitors are data controllers for client information they process. This includes information dictated into third-party tools. The solicitor’s firm is responsible for ensuring that any tools used to process client data comply with GDPR, including data transfer requirements.
The Bar of Ireland has issued guidance noting that barristers, as independent practitioners, are individually responsible for their data protection obligations. A barrister who dictates a consultation note into a tool that processes data in the US is the data controller for that transfer and bears the compliance responsibility.
Neither body has issued specific guidance naming dictation tools, but both have made clear that the use of any technology to process client data falls within the practitioner’s data protection obligations.
A practical checklist
For any solicitor or barrister evaluating dictation tools, here are the questions to ask:
- Where is the audio processed? Not where the company is based — where the actual transcription processing happens. Ask for specifics: which data centres, which jurisdiction.
- Where is the transcript stored? Even if processing happens in the EU, the resulting transcript may be stored elsewhere or replicated to non-EU infrastructure.
- Is there a data processing agreement (DPA)? Under GDPR Article 28, any processor handling personal data on your behalf must have a DPA in place. Consumer tools (like built-in phone dictation) typically don't offer this.
- What is the data retention period? How long does the tool keep your audio and transcripts? Is it deleted after processing, or retained for model training?
- Is audio used for model training? Some tools retain audio data to improve their AI models. If your client consultation is being used to train a commercial AI model, that raises both GDPR and privilege concerns.
- Can you demonstrate compliance? If the DPC or a client asked you to explain where their data goes when you dictate, could you answer clearly?
The EU-only option
The simplest way to address GDPR concerns with voice dictation is to use a tool that processes all data within the EU, with no transfers to the US or other non-EU jurisdictions.
dictate& was built with this as a foundational requirement. All audio processing and transcript storage happens within the EU. No data is transferred to the US. The privacy policy sets this out explicitly, and the technical architecture enforces it — not as a configuration option, but as a hard constraint.
This doesn’t make GDPR compliance automatic. Practitioners still need to consider data minimisation, retention, security, and their obligations as data controllers. But it removes the most significant compliance risk: the international transfer of client data to a jurisdiction with weaker protections.
What to do now
If you’re currently using a dictation tool — whether it’s Dragon, phone dictation, or something else — it’s worth taking ten minutes to check where your data goes. Review the tool’s privacy policy. Look for specifics about data processing location. If the answer is “US” or “global” or vague, consider whether that’s consistent with your obligations.
You don’t need to stop dictating. Voice dictation is a valuable tool for solicitors and barristers — it saves time, improves documentation quality, and lets you capture details while they’re fresh. But the tool you choose should be one you can defend if asked.
In an era when the DPC is actively enforcing GDPR and clients are increasingly aware of data protection, using an EU-only dictation tool isn’t just cautious — it’s professional.