ios – swift – how to get audio output from the mic on iPhone at an 8 kHz sample rate with a buffer size of 16 ms & 1 channel?


For a demo app on iPhone, I need to get audio output from the mic at 8 kHz with a buffer size of 16 ms, so that it gives 128 bytes of frame length at a time (8000 samples/s × 0.016 s = 128 samples, 1 byte each in µ-law).

I’ve tried AVAudioEngine by installing a tap on the input node, but there is no way of setting the sample rate and buffer size there. Any suggestions on how to achieve 128-byte output?
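For reference, the target numbers work out like this (note that the `bufferSize` passed to `installTap(onBus:bufferSize:format:)` is only a request; the system may deliver a different, usually larger, buffer):

```swift
// Target: 8 kHz mono µ-law, 16 ms per buffer.
let sampleRate = 8_000.0
let bufferDuration = 0.016                              // 16 ms
let framesPerBuffer = Int(sampleRate * bufferDuration)  // 128 frames
// µ-law encodes 1 byte per frame, so 128 frames == 128 bytes.
```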


import AVFoundation

private func startRecording() throws {
    guard hasPermissionToRecord else { return }

    if engine == nil {
        engine = AVAudioEngine()
    }
    let inputNode = engine.inputNode
    let microphoneOutputFormat = inputNode.outputFormat(forBus: 0)
    let microphoneSampleRate = microphoneOutputFormat.sampleRate

    // Target format: 8 kHz, mono, 8-bit µ-law (1 byte per frame).
    var mulawDescription = AudioStreamBasicDescription(
        mSampleRate: 8000,
        mFormatID: kAudioFormatULaw,
        mFormatFlags: 0,
        mBytesPerPacket: 1,
        mFramesPerPacket: 1,
        mBytesPerFrame: 1,
        mChannelsPerFrame: 1,
        mBitsPerChannel: 8,
        mReserved: 0
    )
    let mulawFormat = AVAudioFormat(streamDescription: &mulawDescription)!
    converter = AVAudioConverter(from: microphoneOutputFormat, to: mulawFormat)

    // bufferSize is only a request; the hardware may deliver larger buffers.
    inputNode.installTap(onBus: 0, bufferSize: defaultUlawBufferSize, format: microphoneOutputFormat) { [self] (buffer: AVAudioPCMBuffer, time: AVAudioTime) in
        let secondsInBuffer = Float64(buffer.frameLength) / microphoneSampleRate
        let ulawBufferSize = AVAudioFrameCount(8000 * secondsInBuffer) // e.g. 800 for a 0.1 s input buffer

        // Conversion – downsample to 8 kHz µ-law.
        guard let mulawBuffer = AVAudioPCMBuffer(pcmFormat: converter.outputFormat, frameCapacity: ulawBufferSize) else {
            return
        }
        var outError: NSError? = nil
        let microphoneInputBlock: AVAudioConverterInputBlock = { (inNumPackets, outStatus) -> AVAudioBuffer? in
            outStatus.pointee = .haveData
            return buffer
        }
        converter.convert(to: mulawBuffer, error: &outError, withInputFrom: microphoneInputBlock)

        // One µ-law frame is one byte, so frameLength equals the byte count.
        let mulawData = Data(bytes: mulawBuffer.audioBufferList.pointee.mBuffers.mData!,
                             count: Int(mulawBuffer.frameLength))
        if !(receivedAudioCallback?(mulawData) ?? false) {
            // … (the original snippet is truncated here)
        }
    }

    engine.prepare()
    try engine.start()
}

I’ve tried this code; after downsampling to 8 kHz it gives me 800 bytes per buffer.
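Since the tap's `bufferSize` is only a hint, one common workaround is to accept whatever buffer size the tap delivers, convert it to 8 kHz µ-law as above, and then re-chunk the converted bytes into fixed 128-byte frames yourself. A minimal sketch of that idea (the `FrameChunker` type and its names are illustrative, not part of any framework):

```swift
// Accumulates converted µ-law bytes and emits fixed-size frames.
final class FrameChunker {
    private var pending: [UInt8] = []
    private let frameSize: Int
    private let onFrame: ([UInt8]) -> Void

    init(frameSize: Int = 128, onFrame: @escaping ([UInt8]) -> Void) {
        self.frameSize = frameSize
        self.onFrame = onFrame
    }

    /// Append newly converted bytes; fires `onFrame` once per complete frame.
    func append(_ bytes: [UInt8]) {
        pending.append(contentsOf: bytes)
        while pending.count >= frameSize {
            onFrame(Array(pending.prefix(frameSize)))
            pending.removeFirst(frameSize)
        }
    }
}
```

Feeding it the 800-byte buffers from the tap yields six full 128-byte frames per buffer, with the remaining 32 bytes carried over into the next one, so the downstream consumer always sees exact 16 ms frames.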
