Supported Audio Formats
The following audio containers and their associated codecs are supported by the Corti API:

| Container | Supported Encodings | Comments |
|---|---|---|
| Ogg | Opus, Vorbis | Excellent quality at low bandwidth |
| WebM | Opus, Vorbis | Excellent quality at low bandwidth |
| MP4/M4A | AAC, MP3 | Compression may degrade transcription quality |
| MP3 | MP3 | Compression may degrade transcription quality |
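The container/codec pairs above can be encoded in the browser with `MediaRecorder`. The following sketch maps the table to MediaRecorder-style MIME type strings; the `SUPPORTED_FORMATS` map and `audioMimeType` helper are illustrative names for this example, not part of the Corti API.

```typescript
// Illustrative map of the supported container/codec pairs from the table above.
const SUPPORTED_FORMATS: Record<string, string[]> = {
  ogg: ["opus", "vorbis"],
  webm: ["opus", "vorbis"],
  mp4: ["aac", "mp3"],
  mp3: ["mp3"],
};

// Build a MIME type string such as "audio/webm;codecs=opus", or return
// null when the container/codec pair is not in the table.
function audioMimeType(container: string, codec: string): string | null {
  const c = container.toLowerCase();
  const codecs = SUPPORTED_FORMATS[c];
  if (!codecs || !codecs.includes(codec.toLowerCase())) return null;
  return `audio/${c};codecs=${codec.toLowerCase()}`;
}
```

In a browser you would typically confirm the result with `MediaRecorder.isTypeSupported(mimeType)` before passing it to `new MediaRecorder(stream, { mimeType })`.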
In addition to the formats defined above, WAV files are supported for upload to the /recordings endpoint.

We recommend a sample rate of 16 kHz to capture the full range of human speech frequencies; higher rates offer negligible recognition benefit while increasing computational cost.

Raw audio streams are not supported at this time.

Microphone Configuration
Dictation
| Setting | Recommendation | Rationale |
|---|---|---|
| echoCancellation | Off | Ensures clear, unfiltered audio from near-field recording. |
| autoGainControl | Off | Manual calibration of microphone gain level provides optimal support for consistent dictation patterns (i.e., microphone placement and speaking pattern). Recalibrate when dictation environments change (e.g., moving from a quiet to noisy environment). Recommend setting input gain with average loudness around –12 dBFS RMS (peaks near –3 dBFS) to prevent audio clipping. |
| noiseSuppression | Mild (-15dB) | Removes background noise (e.g., HVAC); adjust as needed to optimize for your environment. |
Doctor/Patient Conversation
| Setting | Recommendation | Rationale |
|---|---|---|
| echoCancellation | On | Suppresses echoed audio played by your device speaker, e.g., a remote call participant’s voice or system alert sounds. |
| autoGainControl | On | Adaptive correction of input gain to support varying loudness and speaking patterns of conversational audio. |
| noiseSuppression | Mild (-15dB) | Removes background noise (e.g., HVAC); adjust as needed to optimize for your environment. |
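The two settings tables above map onto the browser’s `getUserMedia` audio constraints. A minimal sketch of both presets follows; note that the standard constraints API exposes `noiseSuppression` only as a boolean, so a specific attenuation level such as −15 dB has to be applied in your own processing chain. The interface and preset names are assumptions for this example.

```typescript
// Local interface mirroring the relevant MediaTrackConstraints fields,
// so this sketch is self-contained outside a browser environment.
interface AudioCaptureConstraints {
  echoCancellation: boolean;
  autoGainControl: boolean;
  noiseSuppression: boolean; // boolean only; the -15 dB level is applied downstream
  channelCount: number;
  sampleRate: number;
}

// Dictation: near-field, manually calibrated capture.
const dictationConstraints: AudioCaptureConstraints = {
  echoCancellation: false, // keep near-field audio unfiltered
  autoGainControl: false,  // calibrate input gain manually (~-12 dBFS RMS)
  noiseSuppression: true,  // mild suppression; tune for your environment
  channelCount: 1,
  sampleRate: 16000,
};

// Doctor/patient conversation: adaptive capture of varying speakers.
const conversationConstraints: AudioCaptureConstraints = {
  echoCancellation: true, // suppress speaker playback (remote voice, alerts)
  autoGainControl: true,  // adapt to varying conversational loudness
  noiseSuppression: true,
  channelCount: 1,
  sampleRate: 16000,
};
```

In a browser, either preset would be passed as `navigator.mediaDevices.getUserMedia({ audio: dictationConstraints })`.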
Maintain average loudness around –12 dBFS RMS with peaks near –3 dBFS for optimal ASR normalization.
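The loudness targets above can be checked directly on captured PCM float samples (range −1..1). A minimal sketch, with function names chosen for this example:

```typescript
// RMS level of float PCM samples, in dBFS (0 dBFS = full scale).
function rmsDbfs(samples: Float32Array): number {
  let sum = 0;
  for (let i = 0; i < samples.length; i++) sum += samples[i] * samples[i];
  return 20 * Math.log10(Math.sqrt(sum / samples.length));
}

// Peak level of float PCM samples, in dBFS.
function peakDbfs(samples: Float32Array): number {
  let peak = 0;
  for (let i = 0; i < samples.length; i++) peak = Math.max(peak, Math.abs(samples[i]));
  return 20 * Math.log10(peak);
}
```

During gain calibration, aim for `rmsDbfs` around −12 and `peakDbfs` no higher than about −3 to stay clear of clipping.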
Recording Best Practices
Environment
| Factor | Recommendation | Rationale |
|---|---|---|
| Ambient noise | Keep background noise below 40 dBA (quiet office). | Prevent unintentional audio from being picked up by speech recognition. |
| Reverberation | Use rooms with carpeted, non-reflective surfaces when possible. | Reduces audio reverberation that can harm recognition accuracy or diarization performance. |
| Microphone type | Use directional microphones for dictation and beamforming array microphones for conversations. | Focuses on the primary speaker and suppresses background noise. |
| Microphone placement | Keep the microphone near the side of your mouth so you do not breathe directly into it. A distance of 10–20 cm for dictation is ideal, or within 1 m for doctor/patient conversation. | Balances clarity and comfort. |
| Laptop microphones | Avoid when possible. Prefer external USB, desktop, or wearable/headset mics. | Built-in mics capture keyboard and fan noise. |
Use of Mobile Devices
iPhones and iPads
Modern iOS devices have high-quality MEMS microphone arrays and can deliver professional ASR results if configured correctly:

- Use the Voice Memos app or any third-party app (such as Corti Assistant) that exports uncompressed WAV, FLAC, or Opus
- Use the 16-bit / 16 kHz mono PCM audio format
- Use the microphone on the bottom of the device as the primary microphone (speak toward it as you would on a phone call, not toward the screen or the top/side array mics)
- Disable Voice Isolation or Wide Spectrum as these apply aggressive filters that can distort audio quality
- Leave system gain fixed (do not rely on iOS loudness compensation) in order to prevent dynamic gain shifts that disrupt ASR input consistency
- If possible, explore wired or MFi-certified microphones for optimal audio quality capture
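When uploading captured samples in the recommended 16-bit / 16 kHz mono PCM format, they need a standard RIFF/WAVE header. A minimal sketch of building the 44-byte header (the `wavHeader` helper is an assumption for this example, not a Corti API):

```typescript
// Build a 44-byte RIFF/WAVE header for 16-bit PCM audio.
// dataLength is the PCM payload size in bytes.
function wavHeader(dataLength: number, sampleRate = 16000, channels = 1): Uint8Array {
  const bytesPerSample = 2; // 16-bit
  const byteRate = sampleRate * channels * bytesPerSample;
  const buf = new ArrayBuffer(44);
  const view = new DataView(buf);
  const ascii = (offset: number, s: string) => {
    for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
  };
  ascii(0, "RIFF");
  view.setUint32(4, 36 + dataLength, true); // overall chunk size
  ascii(8, "WAVE");
  ascii(12, "fmt ");
  view.setUint32(16, 16, true);             // fmt sub-chunk size
  view.setUint16(20, 1, true);              // audio format 1 = PCM
  view.setUint16(22, channels, true);
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, byteRate, true);
  view.setUint16(32, channels * bytesPerSample, true); // block align
  view.setUint16(34, 16, true);             // bits per sample
  ascii(36, "data");
  view.setUint32(40, dataLength, true);
  return new Uint8Array(buf);
}
```

The header is simply prepended to the raw little-endian PCM samples to produce a valid WAV file.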
Android Devices
Android devices have variable microphone hardware, but most of the guidelines for iPhones listed above also apply. Prefer external USB or Bluetooth headsets that record 16-bit / 16 kHz mono PCM. If use of Android is required, please contact us for further assistance selecting the best mobile microphone option.

Channel Configuration
Choosing the right channel configuration ensures accurate transcription, speaker separation, and diarization across different use cases.

| Channel configuration | Workflow | Rationale |
|---|---|---|
| Mono | Dictation or in-room doctor/patient conversation | Speech recognition models expect a single coherent input source. Using one channel avoids phase cancellation and ensures consistent amplitude. Mono also reduces bandwidth and file size without affecting accuracy. |
| Multichannel (stereo or dual mono) | Telehealth or remote doctor/patient conversations | Assigns each participant to a dedicated channel, allowing the ASR system to perform accurate diarization (speaker attribution). Provides better control over noise suppression and improves transcription accuracy when voices overlap. |
Multichannel configuration
Additional Notes
- Each channel should capture only one speaker’s microphone feed to avoid cross-talk or echo between channels. Diarization is most reliable with multichannel audio and may be inconsistent with mono audio.
- Mono audio will show one channel (-1), whereas dual mono will show two channels (0, 1).
- Keep all channels aligned in time; do not trim or delay audio streams independently.
- Use mono capture per channel (16-bit / 16 kHz PCM) even when using multichannel containers (e.g., stereo WAV, WebM, or Ogg).
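For multichannel capture, each participant’s feed lives in its own channel of the container. A minimal sketch of splitting an interleaved 16-bit stereo stream into two time-aligned per-speaker mono channels, as the notes above describe (the function name and channel assignments are illustrative):

```typescript
// Split interleaved stereo Int16 PCM (L R L R ...) into two
// time-aligned mono channels, one per speaker.
function deinterleaveStereo(interleaved: Int16Array): [Int16Array, Int16Array] {
  const frames = interleaved.length / 2;
  const ch0 = new Int16Array(frames); // e.g. local clinician microphone
  const ch1 = new Int16Array(frames); // e.g. remote patient audio
  for (let i = 0; i < frames; i++) {
    ch0[i] = interleaved[2 * i];
    ch1[i] = interleaved[2 * i + 1];
  }
  return [ch0, ch1];
}
```

Because both outputs come from the same interleaved frames, they stay sample-aligned in time, which is what keeps diarization reliable.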
Please contact us if you need more information about supported audio formats or are having issues processing an audio file.

Additional references and resources: