iOS Audio: AVAudioEngine

Apple's description of AVAudioEngine:

A group of connected audio node objects used to generate and process audio signals and perform audio input and output.

You create each audio node separately and attach it to the audio engine. At runtime you can perform all operations on audio nodes (connecting, disconnecting, and removing them), with only a few limitations:

  • Reconnect audio nodes only when they’re upstream of a mixer.
  • If you remove an audio node that has differing input and output channel counts, or that is a mixer, the result is likely to be a broken graph.

Background reading:

  • Building Modern Audio Apps with AVAudioEngine - a good overview, worth a close read
  • iOS AVAudioEngine
  • AVAudioEngine Tutorial for iOS: Getting Started


AVAudioEngine is part of AVFoundation.

What AVAudioEngine does:

  • Manages graphs of audio nodes
  • Connects audio nodes into active chains
  • Dynamically attaches and reconfigures the graph
  • Starts and stops the engine

About AVAudioNode:

  • Nodes are audio blocks

    • Source nodes: player, microphone
    • Processing nodes: mixer, audio unit effect
    • Destination nodes: speaker, headphones
  • AVAudioEngine provides three implicit nodes:

    • AVAudioInputNode - system input; cannot be created directly
    • AVAudioOutputNode - system output; cannot be created directly
    • AVAudioMixerNode - mixes multiple inputs down to a single output
  • Nodes are connected via their input and output buses

    • Most nodes have one input and one output

      • AVAudioMixerNode has multiple inputs and one output
    • Each bus has an associated audio format

Connecting Nodes

Basic usage

The general workflow:

  1. Create the engine
  2. Create the nodes
  3. Attach the nodes to the engine
  4. Connect the nodes together
  5. Start the engine

Setting up the engine

    // 1. Create engine (example only; it needs to be a strong reference)
    AVAudioEngine *engine = [[AVAudioEngine alloc] init];
    // 2. Create a player node
    AVAudioPlayerNode *player = [[AVAudioPlayerNode alloc] init];
    // 3. Attach node to the engine
    [engine attachNode:player];
    // 4. Connect player node to engine's main mixer
    AVAudioMixerNode *mixer = engine.mainMixerNode;
    [engine connect:player to:mixer format:[mixer outputFormatForBus:0]];
    // 5. Start engine
    NSError *error;
    if (![engine startAndReturnError:&error]) {
        // handle error
    }

Playing audio

Audio Files

AVAudioFile represents an audio file that can be read from or written to. From the docs:

Regardless of the file's actual format, you read and write it using AVAudioPCMBuffer objects that contain samples using AVAudioCommonFormat. This format is referred to as the file's processing format. Conversions are performed to and from the file's actual format.
Reads and writes are always sequential, but random access is possible by setting the framePosition property.

  • Reads and writes files in all Core Audio supported formats
  • Automatically decodes when reading, encodes when writing
    • Does not support sample rate conversion
  • A file has both a file format and a processing format
    • fileFormat: the on-disk format
    • processingFormat: the uncompressed, in-memory format
    • Both are instances of AVAudioFormat
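The gap between the two formats matters for memory: a compressed file on disk expands to uncompressed PCM when read into an AVAudioPCMBuffer. A rough sketch of the in-memory size, assuming the standard deinterleaved 32-bit float processing format (the durations here are arbitrary example values):

```python
def pcm_buffer_bytes(frame_count, channel_count, bytes_per_sample=4):
    """Approximate in-memory size of a decoded PCM buffer (float32 by default)."""
    return frame_count * channel_count * bytes_per_sample

# A 3-minute stereo file at 44.1 kHz, decoded for processing:
frames = 44100 * 180
print(pcm_buffer_bytes(frames, 2))  # 63504000 bytes, ~60 MB for a few MB of m4a on disk
```

This is why reading an entire long file into one buffer (as the demos below do for short clips) is fine for effects and loops but not for large assets.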

Audio Formats

AVAudioFormat describes the format of a buffer of audio data. Instances of this class are immutable.

This class wraps a Core Audio AudioStreamBasicDescription structure, with convenience initializers and accessors for common formats, including Core Audio's standard deinterleaved, 32-bit floating-point format.

  • Provides a format descriptor for the digital audio samples

    • Provides access to sample rate, channel count, interleaving, etc.
    • Wrapper over Core Audio AudioStreamBasicDescription
  • Core Audio uses a “Standard” format for both platforms
    • Noninterleaved linear PCM, 32-bit floating point samples
    • Canonical formats are deprecated!
  • Additionally supports “Common” formats
    • AVAudioCommonFormat: 16/32-bit integer, 32/64-bit floating point
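To make "deinterleaved" concrete: interleaved PCM stores samples as [L0, R0, L1, R1, ...], while Core Audio's standard format keeps one contiguous block of samples per channel. A small sketch of the conversion:

```python
def deinterleave(samples, channel_count):
    """Split interleaved samples [L0, R0, L1, R1, ...] into one list per channel."""
    return [samples[c::channel_count] for c in range(channel_count)]

deinterleave([0.1, 0.9, 0.2, 0.8], 2)  # [[0.1, 0.2], [0.9, 0.8]]
```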

Audio Buffers

AVAudioPCMBuffer

  • Memory buffer for audio data in any Linear PCM format

    • Format and buffer capacity defined upon creation
  • Provides a wrapper over a Core Audio AudioBufferList
    • audioBufferList and mutableAudioBufferList properties
  • Sample data accessed using:
      @property (nonatomic, readonly) float * const *floatChannelData;
      @property (nonatomic, readonly) int16_t * const *int16ChannelData;
      @property (nonatomic, readonly) int32_t * const *int32ChannelData;
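floatChannelData exposes one sample array per channel. As a sketch of what filling such a buffer looks like (e.g. generating a sine tone to schedule on a player node), here is the equivalent layout in plain Python; the frame count, rate, and frequency are arbitrary example values:

```python
import math

def fill_sine(frame_count, channel_count, sample_rate, frequency):
    """One list of float samples per channel, mirroring floatChannelData's layout."""
    return [[math.sin(2 * math.pi * frequency * n / sample_rate)
             for n in range(frame_count)]
            for _ in range(channel_count)]

data = fill_sine(frame_count=4, channel_count=2, sample_rate=8, frequency=1)
# data[0][2] is frame 2 of channel 0: sin(pi/2) = 1.0
```

In Objective-C you would write the same values into `buffer.floatChannelData[channel][frame]` and then set `buffer.frameLength`.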

Player Nodes

AVAudioPlayerNode

  • Pushes audio data onto the active render thread
  • Schedules audio data from files and buffers
    • Scheduled to play immediately or at a future time

      • Future times are specified with AVAudioTime
    • Files
      • Schedule a file or file segment with a completion callback
    • Buffers
      • Schedule multiple buffers with individual completion callbacks
      • Schedule a looping buffer

Scheduling Files and Buffers

Immediate File Playback

[playerNode scheduleFile:audioFile atTime:nil completionHandler:nil];
[playerNode play];

Immediate Buffer Playback

[playerNode scheduleBuffer:audioBuffer completionHandler:nil];
[playerNode play];

Future Buffer Playback

// Play the audio buffer 5 seconds from now
double sampleRate = buffer.format.sampleRate;
double sampleTime = sampleRate * 5.0;
AVAudioTime *futureTime = [AVAudioTime timeWithSampleTime:sampleTime atRate:sampleRate];
[playerNode scheduleBuffer:audioBuffer atTime:futureTime options:0 completionHandler:nil];
[playerNode play];
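The conversion in the snippet above is just seconds × sample rate; a sketch of the arithmetic (the 5-second offset and 44.1 kHz rate are example values):

```python
def seconds_to_sample_time(seconds, sample_rate):
    """Frame offset, as passed to AVAudioTime's timeWithSampleTime:atRate:."""
    return int(seconds * sample_rate)

seconds_to_sample_time(5.0, 44100)  # 220500 frames
```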

Creating Files and Buffers

    NSURL *url = [[NSBundle mainBundle] URLForResource:@"groove" withExtension:@"m4a"];
    // Create AVAudioFile
    AVAudioFile *file = [[AVAudioFile alloc] initForReading:url error:nil];
    // Create AVAudioPCMBuffer
    AVAudioFormat *format = file.processingFormat;
    AVAudioFrameCount capacity = (AVAudioFrameCount)file.length;
    AVAudioPCMBuffer *buffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:format
                                                             frameCapacity:capacity];
    // Read AVAudioFile -> AVAudioPCMBuffer
    [file readIntoBuffer:buffer error:nil];

Notes

1. AVAudioNode

AVAudioNode is an abstract class for audio generation, processing, or I/O blocks. An AVAudioEngine object contains instances of various AVAudioNode subclasses.
Nodes have input and output buses, which you can think of as connection points. For example, an effect typically has one input bus and one output bus, while a mixer typically has multiple input buses and one output bus.
A bus has a format, expressed in terms of sample rate and channel count. When making connections between nodes, the formats usually must match exactly. There are a few exceptions, such as the AVAudioMixerNode and AVAudioOutputNode classes.
An audio node is only useful once it is attached to an audio engine.
Subclasses include:

  • AVAudioMixerNode - A node with an output volume; it mixes its inputs down to a single output. The AVAudioEngine's built-in mainMixerNode is an AVAudioMixerNode.
  • AVAudioIONode - A node that patches through to the system's (device's) own input (AVAudioInputNode) or output (AVAudioOutputNode). The AVAudioEngine's built-in inputNode and outputNode are AVAudioIONodes.
  • AVAudioPlayerNode - Plays audio from a file or buffer.
  • AVAudioEnvironmentNode - Simulates a 3D audio environment.
  • AVAudioUnit - A node that processes its input with special effects before passing it to the output. Built-in subclasses include:
    • AVAudioUnitTimePitch - changes playback rate and pitch independently
    • AVAudioUnitVarispeed - changes playback rate (pitch shifts along with it)
    • AVAudioUnitDelay - delay effect
    • AVAudioUnitDistortion - distortion effect
    • AVAudioUnitEQ - equalizer
    • AVAudioUnitReverb - reverb
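The demos below set wetDryMix = 40 on a reverb. wetDryMix is a 0-100 percentage: 0 is fully dry (unprocessed signal only), 100 fully wet (effect output only). Modeled here as a simple linear blend, which is an assumption; the exact crossfade curve inside the audio unit is not documented:

```python
def wet_dry_blend(dry_sample, wet_sample, wet_dry_mix):
    """Linear model of a 0-100 wetDryMix blend."""
    wet = wet_dry_mix / 100.0
    return dry_sample * (1.0 - wet) + wet_sample * wet

wet_dry_blend(1.0, 0.0, 40)  # 0.6: at wetDryMix = 40, 60% of the dry signal remains
```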

2. What is Audio Mixing?

Audio mixing is a step in music production in which sounds from multiple sources are combined into a stereo or mono track. The source signals may come from different instruments, vocals, or an orchestra, recorded live or in a studio. During mixing, the engineer individually adjusts each source signal's frequency content, dynamics, tone, panning, reverb, and sound field to optimize every track before summing them into the final result. This produces a clean, well-layered mix of a kind an ordinary listener would not hear in a raw live recording.

Panorama (Pan)

Panorama, usually abbreviated Pan, positions a sound between the left and right channels of a stereo mix. It is used to separate tracks from each other, increase clarity, and keep instruments from masking one another. Used well, panning makes the sound wider and the sound stage deeper; as with everything else in mixing, there is no absolute right or wrong way to set it, and it should be judged case by case.

— Wikipedia: Audio mixing
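AVAudioPlayerNode's pan property (set to -0.5 and 0.5 in demo 3 below) runs from -1 (full left) to 1 (full right). A common way to turn a pan position into channel gains is an equal-power pan law, sketched here; whether the mixer node uses exactly this curve is an assumption:

```python
import math

def equal_power_pan(pan):
    """pan in [-1, 1] -> (left_gain, right_gain) with constant total power."""
    theta = (pan + 1.0) * math.pi / 4.0   # maps [-1, 1] to [0, pi/2]
    return math.cos(theta), math.sin(theta)

left, right = equal_power_pan(0.0)  # centered: both gains ~0.707 (-3 dB)
```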

Demo

The examples are adapted from Programming iOS 12 and the web.

1. Playing a local audio file

    AVAudioPlayerNode *player = [[AVAudioPlayerNode alloc] init];
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"aboutTiagol" withExtension:@"m4a"];
    NSError *error;
    AVAudioFile *file = [[AVAudioFile alloc] initForReading:url error:&error];
    if (error) {
        NSLog(@"%@", [error localizedDescription]);
        return;
    }
    [self.audioEngine attachNode:player];
    [self.audioEngine connect:player to:self.audioEngine.mainMixerNode format:file.processingFormat];
    [player scheduleFile:file atTime:nil completionHandler:^{
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.1 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            if (self.audioEngine.isRunning) {
                [self.audioEngine stop];
            }
        });
    }];
    [self.audioEngine prepare];
    [self.audioEngine startAndReturnError:&error];
    if (error) {
        NSLog(@"%@", [error localizedDescription]);
    }
    [player play];

2. Buffer playback example

    NSURL *url = [[NSBundle mainBundle] URLForResource:@"Hooded" withExtension:@"mp3"];
    NSError *error;
    AVAudioFile *file = [[AVAudioFile alloc] initForReading:url error:&error];
    if (error) {
        NSLog(@"%@", [error localizedDescription]);
        return;
    }
    AVAudioPCMBuffer *buffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:file.processingFormat
                                                             frameCapacity:(AVAudioFrameCount)file.length];
    [file readIntoBuffer:buffer error:&error];
    if (error) {
        NSLog(@"%@", [error localizedDescription]);
        return;
    }
    AVAudioPlayerNode *player = [[AVAudioPlayerNode alloc] init];
    [self.audioEngine attachNode:player];
    [self.audioEngine connect:player to:self.audioEngine.mainMixerNode format:file.processingFormat];
    // Schedule the buffer (not the file) for playback
    [player scheduleBuffer:buffer completionHandler:^{
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.1 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            if (self.audioEngine.isRunning) {
                [self.audioEngine stop];
            }
        });
    }];
    [self.audioEngine prepare];
    [self.audioEngine startAndReturnError:&error];
    if (error) {
        NSLog(@"%@", [error localizedDescription]);
    }
    [player play];

3. Adding more nodes. This example plays two sounds at once: the first from a file, the second from a buffer (looped).
The first sound is routed through a time-pitch effect node and then through a reverb effect node.

Time Pitch is an effect that changes playback rate and pitch, e.g. the Talking Tom Cat voice.

    AVAudioPlayerNode *player = [[AVAudioPlayerNode alloc] init];
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"aboutTiagol" withExtension:@"m4a"];
    NSError *error;
    AVAudioFile *file = [[AVAudioFile alloc] initForReading:url error:&error];
    if (error) {
        NSLog(@"%@", [error localizedDescription]);
        return;
    }
    [self.audioEngine attachNode:player];
    // first effect node: time pitch
    AVAudioUnitTimePitch *effect = [[AVAudioUnitTimePitch alloc] init];
    effect.rate = 0.9;
    effect.pitch = -300;
    [self.audioEngine attachNode:effect];
    [self.audioEngine connect:player to:effect format:file.processingFormat];
    // second effect node: reverb
    AVAudioUnitReverb *effect2 = [[AVAudioUnitReverb alloc] init];
    [effect2 loadFactoryPreset:AVAudioUnitReverbPresetCathedral];
    effect2.wetDryMix = 40;
    [self.audioEngine attachNode:effect2];
    [self.audioEngine connect:effect to:effect2 format:file.processingFormat];
    // patch the last node into the engine mixer and start playing the first sound
    AVAudioMixerNode *mixer = self.audioEngine.mainMixerNode;
    [self.audioEngine connect:effect2 to:mixer format:file.processingFormat];
    [player scheduleFile:file atTime:nil completionHandler:^{
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.1 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            if (self.audioEngine.isRunning) {
                [self.audioEngine stop];
            }
        });
    }];
    [self.audioEngine prepare];
    [self.audioEngine startAndReturnError:&error];
    if (error) {
        NSLog(@"%@", [error localizedDescription]);
    }
    [player play];
    // second sound, looped
    NSURL *url2 = [[NSBundle mainBundle] URLForResource:@"Hooded" withExtension:@"mp3"];
    AVAudioFile *file2 = [[AVAudioFile alloc] initForReading:url2 error:&error];
    if (error) {
        NSLog(@"%@", [error localizedDescription]);
        return;
    }
    AVAudioPCMBuffer *buffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:file2.processingFormat
                                                             frameCapacity:(AVAudioFrameCount)file2.length];
    [file2 readIntoBuffer:buffer error:&error];
    if (error) {
        NSLog(@"%@", [error localizedDescription]);
        return;
    }
    AVAudioPlayerNode *player2 = [[AVAudioPlayerNode alloc] init];
    [self.audioEngine attachNode:player2];
    [self.audioEngine connect:player2 to:mixer format:file2.processingFormat];
    [player2 scheduleBuffer:buffer atTime:nil options:AVAudioPlayerNodeBufferLoops completionHandler:nil];
    // mix down a little, start playing the second sound
    player.pan = -0.5;
    player2.volume = 0.5;
    player2.pan = 0.5;
    [player2 play];

4. Routing an audio file through a reverb effect and saving the output to a new file

    NSURL *url = [[NSBundle mainBundle] URLForResource:@"Hooded" withExtension:@"mp3"];
    NSError *error;
    AVAudioFile *file = [[AVAudioFile alloc] initForReading:url error:&error];
    if (error) {
        NSLog(@"AVAudioFile error: %@", [error localizedDescription]);
        return;
    }
    AVAudioPlayerNode *player = [[AVAudioPlayerNode alloc] init];
    [self.audioEngine attachNode:player];
    // patch the player into the effect
    AVAudioUnitReverb *effect = [[AVAudioUnitReverb alloc] init];
    [effect loadFactoryPreset:AVAudioUnitReverbPresetCathedral];
    effect.wetDryMix = 40;
    [self.audioEngine attachNode:effect];
    [self.audioEngine connect:player to:effect format:file.processingFormat];
    AVAudioMixerNode *mixer = self.audioEngine.mainMixerNode;
    [self.audioEngine connect:effect to:mixer format:file.processingFormat];
    // create the output file
    NSFileManager *fm = [NSFileManager defaultManager];
    NSURL *doc = [fm URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil];
    NSURL *outurl = [doc URLByAppendingPathComponent:@"myfile.aac" isDirectory:NO];
    [fm removeItemAtURL:outurl error:nil];
    AVAudioFile *outfile = [[AVAudioFile alloc] initForWriting:outurl
                                                      settings:@{AVFormatIDKey : [NSNumber numberWithInt:kAudioFormatMPEG4AAC],
                                                                 AVNumberOfChannelsKey : [NSNumber numberWithInt:1],
                                                                 AVSampleRateKey : [NSNumber numberWithFloat:22050]}
                                                         error:nil];
    NSLog(@"outurl %@", outurl);
    [player scheduleFile:file atTime:nil completionHandler:nil];
    uint32_t sz = 4096;
    [self.audioEngine enableManualRenderingMode:AVAudioEngineManualRenderingModeOffline
                                         format:file.processingFormat
                              maximumFrameCount:(AVAudioFrameCount)sz
                                          error:nil];
    [self.audioEngine prepare];
    [self.audioEngine startAndReturnError:&error];
    [player play];
    AVAudioPCMBuffer *outbuf = [[AVAudioPCMBuffer alloc] initWithPCMFormat:file.processingFormat
                                                             frameCapacity:(AVAudioFrameCount)sz];
    AVAudioFramePosition rest = file.length - self.audioEngine.manualRenderingSampleTime;
    while (rest > 0) {
        AVAudioFrameCount ct = MIN(outbuf.frameCapacity, (AVAudioFrameCount)rest);
        AVAudioEngineManualRenderingStatus stat = [self.audioEngine renderOffline:ct toBuffer:outbuf error:nil];
        if (stat == AVAudioEngineManualRenderingStatusSuccess) {
            [outfile writeFromBuffer:outbuf error:nil];
            rest = file.length - self.audioEngine.manualRenderingSampleTime;
        }
    }
    [player stop];
    [self.audioEngine stop];
    [self play:outurl];
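The manual-rendering loop in demo 4 pulls audio in chunks of at most maximumFrameCount frames until the file is exhausted. Its chunking logic, isolated from the engine calls:

```python
def render_chunk_counts(total_frames, max_frames):
    """Frame counts an offline render loop would request, chunk by chunk."""
    counts, rendered = [], 0
    while rendered < total_frames:
        count = min(max_frames, total_frames - rendered)
        counts.append(count)
        rendered += count
    return counts

render_chunk_counts(10000, 4096)  # [4096, 4096, 1808]
```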

5. Recording and writing to a local file

References:

  • Recording audio file using AVAudioEngine
  • AVAudioEngine完成即時錄音與播放功能

While recording, monitor the input through the headphones (like in-ear monitoring on stage), and write the audio to a local file.

Following the tutorials above, the demo is:

// record
- (IBAction)button5_action:(id)sender {
    NSFileManager *fm = [NSFileManager defaultManager];
    NSURL *doc = [fm URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil];
    NSURL *outurl = [doc URLByAppendingPathComponent:@"record.caf" isDirectory:NO];
    [fm removeItemAtURL:outurl error:nil];
    AVAudioFile *file = [[AVAudioFile alloc] initForWriting:outurl
                                                   settings:[self.audioEngine.mainMixerNode outputFormatForBus:0].settings
                                                      error:nil];
    AVAudioInputNode *inputNode = self.audioEngine.inputNode;
    [self.audioEngine connect:inputNode
                           to:self.audioEngine.mainMixerNode
                       format:[inputNode inputFormatForBus:0]];
    [self.audioEngine.mainMixerNode installTapOnBus:0
                                         bufferSize:1024
                                             format:[self.audioEngine.mainMixerNode outputFormatForBus:0]
                                              block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
        [file writeFromBuffer:buffer error:nil];
    }];
    [self.audioEngine prepare];
    [self.audioEngine startAndReturnError:nil];
}
// stop recording
- (IBAction)stopAction:(id)sender {
    [self.audioEngine.mainMixerNode removeTapOnBus:0];
    [self.audioEngine stop];
    NSFileManager *fm = [NSFileManager defaultManager];
    NSURL *doc = [fm URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil];
    NSURL *outurl = [doc URLByAppendingPathComponent:@"record.caf" isDirectory:NO];
    NSLog(@"outurl %@", outurl);
    [self play:outurl];
}

When AVAudioEngine is initialized, its mainMixerNode is already connected to the output node, so once the input node is connected to the main mixer, real-time recording and monitoring both work. The code above adds exactly that connection between the input node and mainMixerNode.
