AVSpeech Basics (AVFoundation)


AVSpeechUtterance (a speech utterance)
Encapsulates the language (e.g. zh-CN) and the text to be spoken.
@property(nonatomic) NSTimeInterval preUtteranceDelay;    // Default is 0.0; pause before the utterance is spoken
@property(nonatomic) NSTimeInterval postUtteranceDelay;   // Default is 0.0; pause after the utterance finishes

@property(nonatomic) float rate;             // Pinned between AVSpeechUtteranceMinimumSpeechRate and AVSpeechUtteranceMaximumSpeechRate
@property(nonatomic) float pitchMultiplier;  // [0.5 - 2] Default = 1; the pitch of the voice
@property(nonatomic) float volume;           // [0 - 1]   Default = 1

Initializers
+ (instancetype)speechUtteranceWithString:(NSString *)string;
+ (instancetype)speechUtteranceWithAttributedString:(NSAttributedString *)string API_AVAILABLE(ios(10.0), watchos(3.0), tvos(10.0));
- (instancetype)initWithString:(NSString *)string;
- (instancetype)initWithAttributedString:(NSAttributedString *)string API_AVAILABLE(ios(10.0), watchos(3.0), tvos(10.0));
Create an utterance from the text it should speak.


@property(nonatomic, readonly) NSString *speechString;
@property(nonatomic, readonly) NSAttributedString *attributedSpeechString API_AVAILABLE(ios(10.0), watchos(3.0), tvos(10.0));
The attributed-string variants above were added in iOS 10 (watchOS 3 / tvOS 10).
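As a minimal sketch, the properties above can be configured like this before the utterance is handed to a synthesizer (assuming a zh-CN voice is installed on the device):

```objc
#import <AVFoundation/AVFoundation.h>

AVSpeechUtterance *utterance = [AVSpeechUtterance speechUtteranceWithString:@"你好,世界"];
utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"zh-CN"];
utterance.rate = AVSpeechUtteranceDefaultSpeechRate; // pinned to [min, max] if set outside the range
utterance.pitchMultiplier = 1.2f;                    // slightly higher pitch; valid range is [0.5, 2]
utterance.volume = 0.8f;                             // valid range is [0, 1]
utterance.preUtteranceDelay = 0.5;                   // half-second pause before speaking
utterance.postUtteranceDelay = 0.5;                  // half-second pause after speaking
```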

@property(nonatomic, retain, nullable) AVSpeechSynthesisVoice *voice;
Declared retain; assigning a voice here determines which installed system voice (and therefore which language and accent) is used. If it is nil, a default voice for the user's current locale is used.

Setting the voice (language)
@interface AVSpeechSynthesisVoice : NSObject

+ (NSArray *)speechVoices;         // All voices installed on the device
+ (NSString *)currentLanguageCode; // The language code (BCP-47) of the current locale

Initializer (by language code, e.g. @"zh-CN")
+ (nullable AVSpeechSynthesisVoice *)voiceWithLanguage:(nullable NSString *)languageCode;

This initializer looks a voice up by its identifier (iOS 9+), which lets you pick a specific named voice rather than just a language:
+ (nullable AVSpeechSynthesisVoice *)voiceWithIdentifier:(NSString *)identifier NS_AVAILABLE_IOS(9_0);

Read-only properties
@property(nonatomic, readonly) NSString *language;
@property(nonatomic, readonly) NSString *identifier NS_AVAILABLE_IOS(9_0);
@property(nonatomic, readonly) NSString *name NS_AVAILABLE_IOS(9_0);
@property(nonatomic, readonly) AVSpeechSynthesisVoiceQuality quality NS_AVAILABLE_IOS(9_0);

@end
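The class methods above can be combined to enumerate the installed voices and pick one. A sketch (voiceWithLanguage: returns nil if nothing matches the code; AVSpeechSynthesisVoiceIdentifierAlex is a built-in identifier constant from iOS 9):

```objc
#import <AVFoundation/AVFoundation.h>

// List every voice installed on the device.
for (AVSpeechSynthesisVoice *voice in [AVSpeechSynthesisVoice speechVoices]) {
    NSLog(@"%@ (%@)", voice.name, voice.language);
}

// Pick a voice by language code; returns nil if no voice matches.
AVSpeechSynthesisVoice *mandarin = [AVSpeechSynthesisVoice voiceWithLanguage:@"zh-CN"];

// Or select an exact voice by identifier (iOS 9+).
AVSpeechSynthesisVoice *alex = [AVSpeechSynthesisVoice voiceWithIdentifier:AVSpeechSynthesisVoiceIdentifierAlex];
```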

AVSpeechSynthesizer (speech synthesizer)
@property(nonatomic, weak, nullable) id delegate; // the delegate
@property(nonatomic, readonly, getter=isSpeaking) BOOL speaking;  // whether synthesis is in progress
@property(nonatomic, readonly, getter=isPaused) BOOL paused;      // whether synthesis is paused

- (void)speakUtterance:(AVSpeechUtterance *)utterance;
Speaks an utterance; if the synthesizer is already speaking, the utterance is added to a queue.

The boundary argument controls where speech stops; stopping interrupts the current speech and clears the queue:
/* Call stopSpeakingAtBoundary: to interrupt current speech and clear the queue. */
- (BOOL)stopSpeakingAtBoundary:(AVSpeechBoundary)boundary;
- (BOOL)pauseSpeakingAtBoundary:(AVSpeechBoundary)boundary;
- (BOOL)continueSpeaking;

typedef NS_ENUM(NSInteger, AVSpeechBoundary) {
    AVSpeechBoundaryImmediate, // stop immediately
    AVSpeechBoundaryWord       // stop after finishing the current word
} NS_ENUM_AVAILABLE_IOS(7_0);
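Putting the control methods together, a sketch of a speak / pause / resume / stop flow (the button-handler framing is hypothetical):

```objc
#import <AVFoundation/AVFoundation.h>

// Keep a strong reference to the synthesizer for as long as it is speaking.
AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
AVSpeechUtterance *utterance = [AVSpeechUtterance speechUtteranceWithString:@"Hello, world"];
[synthesizer speakUtterance:utterance]; // enqueued if something is already being spoken

// Later, e.g. from a pause/resume button:
if (synthesizer.isSpeaking && !synthesizer.isPaused) {
    [synthesizer pauseSpeakingAtBoundary:AVSpeechBoundaryWord]; // finish the current word, then pause
} else if (synthesizer.isPaused) {
    [synthesizer continueSpeaking];
}

// To cancel the current utterance and everything still queued:
[synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
```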

@property(nonatomic, retain, nullable) NSArray *outputChannels API_AVAILABLE(ios(10.0), watchos(3.0), tvos(10.0));
Specifies which audio channels on the current audio route the synthesized output is copied to.
Constants

In my own testing, AVSpeechUtteranceMinimumSpeechRate is 0.0, AVSpeechUtteranceMaximumSpeechRate is 1.0, and AVSpeechUtteranceDefaultSpeechRate is 0.5.

If any of this is unclear, see the official documentation; these notes are just my reading of it.

For a usage example, see:
http://www.jianshu.com/p/00dac8e040d5
