Symbl doesn’t offer speaker diarization over our real-time API. The best practice for separating speakers is to open a new stream for each one. You can join any number of audio streams into the same Symbl meeting by providing the same meeting ID in each new WebSocket request to the wss://api.symbl.ai/v1/realtime/insights/MEETING_ID?access_token=ACCESS_TOKEN URL. In the config for each stream, you can provide that speaker’s information.
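A minimal sketch of that pattern: build one connection URL (reusing the same meeting ID for every speaker) and send a per-speaker `start_request` config as the first message on each socket. The exact config fields (`insightTypes`, `speechRecognition`, `speaker`) are based on Symbl’s real-time API as documented at the time of writing; treat them as assumptions and verify against the current docs.

```python
import json

MEETING_ID = "my-meeting-id"    # any unique ID you choose; reuse it for every speaker
ACCESS_TOKEN = "ACCESS_TOKEN"   # obtained from Symbl's auth endpoint

def connection_url(meeting_id: str, access_token: str) -> str:
    # Every speaker's WebSocket connects to the same URL, so their
    # streams are merged into one meeting.
    return (f"wss://api.symbl.ai/v1/realtime/insights/"
            f"{meeting_id}?access_token={access_token}")

def start_request(speaker_name: str, speaker_email: str) -> str:
    # Config message sent as the first frame on each speaker's socket;
    # the "speaker" block is what attributes that stream's transcript
    # to a specific person.
    return json.dumps({
        "type": "start_request",
        "insightTypes": ["question", "action_item"],
        "config": {
            "confidenceThreshold": 0.5,
            "speechRecognition": {
                "encoding": "LINEAR16",
                "sampleRateHertz": 16000,
            },
        },
        "speaker": {"userId": speaker_email, "name": speaker_name},
    })

# One shared URL; one start_request per speaker, each on its own socket.
url = connection_url(MEETING_ID, ACCESS_TOKEN)
alice_config = start_request("Alice", "alice@example.com")
bob_config = start_request("Bob", "bob@example.com")
```

After sending each `start_request`, you would stream that speaker’s raw audio frames over their own socket; the resulting transcript messages carry the speaker info you supplied.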
Hi. Are there any plans to support speaker detection and diarization from a real-time single stream?
Can you please confirm whether this is possible on a single audio stream with the async API?
Will the final transcript from a live stream be diarized if there is only one audio stream?