1. Voice chat, or more formally instant voice messaging, is a network-based technique for delivering speech in near real time. It is widely used in social apps, and its main advantages are the following:
(1) Timeliness: video streaming often suffers high delay when bandwidth is tight, whereas voice-only streaming holds up much better; latency stays low and the host can interact with listeners immediately.
(2) Privacy: a host who does not want to show their face, or who is taking listener questions, usually feels more at ease without a camera, so voice streaming offers stronger privacy.
(3) Content quality: without looks to lean on, only good content attracts and keeps an audience, so voice streams tend to have higher content quality.
(4) Lower cost: compared with video streaming, voice streaming consumes far less bandwidth and traffic, which cuts costs considerably.
2. The main steps of voice chat are: audio capture, compression and encoding, network transmission, decoding, and audio playback.
The sections below walk through each of these steps from the code's point of view.
(1) Audio capture: read data from the microphone device.
using NAudio.Wave;

private readonly WaveIn _waveIn;

// Capture from the default microphone (device 0) in 50 ms chunks
_waveIn = new WaveIn();
_waveIn.BufferMilliseconds = 50;
_waveIn.DeviceNumber = 0;
_waveIn.DataAvailable += OnAudioCaptured;
_waveIn.StartRecording();
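DeviceNumber = 0 simply picks the first capture device. If you want to let the user choose, NAudio can enumerate the available microphones; a minimal sketch (deviceComboBox is a hypothetical UI control, not part of the original code):

// List capture devices so the user can pick one instead of hard-coding 0
for (int i = 0; i < WaveIn.DeviceCount; i++)
{
    var caps = WaveIn.GetCapabilities(i);
    deviceComboBox.Items.Add($"{i}: {caps.ProductName}");
}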
(2) Compress and encode the audio data. There are many common codecs, such as MP3, AAC and Speex; Speex is used here as the example.
private readonly WideBandSpeexCodec _speexCodec;
_speexCodec = new WideBandSpeexCodec();
// Make the capture format match what the codec expects
_waveIn.WaveFormat = _speexCodec.RecordFormat;

void OnAudioCaptured(object sender, WaveInEventArgs e)
{
    // Compress the raw PCM buffer and push it to the network client
    byte[] encoded = _speexCodec.Encode(e.Buffer, 0, e.BytesRecorded);
    _audioClient.Send(encoded);
}
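WideBandSpeexCodec itself is not shown in this post; it is assumed to follow the codec pattern from NAudio's network chat demo, i.e. something shaped like the sketch below (IVoiceCodec is an illustrative name, not an API from the post):

// Assumed codec contract: expose the PCM format to record in,
// plus symmetric Encode/Decode between PCM and compressed frames.
public interface IVoiceCodec : IDisposable
{
    WaveFormat RecordFormat { get; }                      // PCM format fed to Encode / produced by Decode
    byte[] Encode(byte[] data, int offset, int length);   // PCM -> Speex frame
    byte[] Decode(byte[] data, int offset, int length);   // Speex frame -> PCM
}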
(3) Network transmission. To keep the audio real-time, UDP is a natural fit: it avoids connection and retransmission overhead, at the cost of tolerating occasional packet loss.
using SAEA.Sockets;
using SAEA.Sockets.Base;
using SAEA.Sockets.Model;
using System;
using System.Net;

namespace GFF.Component.GAudio.Net
{
    // UDP client that sends encoded audio frames and raises OnReceive for incoming ones
    public class AudioClient
    {
        IClientSocket _udpClient;

        BaseUnpacker _baseUnpacker;

        public event Action<byte[]> OnReceive;

        public AudioClient(IPEndPoint endPoint)
        {
            var bContext = new BaseContext();

            _udpClient = SocketFactory.CreateClientSocket(SocketOptionBuilder.Instance.SetSocket(SAEASocketType.Udp)
                .SetIPEndPoint(endPoint)
                .UseIocp(bContext)
                .SetReadBufferSize(SocketOption.UDPMaxLength)
                .SetWriteBufferSize(SocketOption.UDPMaxLength)
                .Build());

            _baseUnpacker = (BaseUnpacker)bContext.Unpacker;

            _udpClient.OnReceive += _udpClient_OnReceive;
        }

        private void _udpClient_OnReceive(byte[] data)
        {
            OnReceive?.Invoke(data);
        }

        public void Connect()
        {
            _udpClient.Connect();
        }

        public void Send(byte[] data)
        {
            _udpClient.SendAsync(data);
        }

        public void Disconnect()
        {
            _udpClient.Disconnect();
        }
    }
}
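A minimal usage sketch of this AudioClient; the address, port and the encodedFrame variable are placeholders for illustration only:

// Connect to the relay server and exchange encoded audio frames
var audioClient = new AudioClient(new IPEndPoint(IPAddress.Parse("127.0.0.1"), 39654));
audioClient.OnReceive += data => Console.WriteLine($"received {data.Length} bytes");
audioClient.Connect();
audioClient.Send(encodedFrame);   // encodedFrame: a Speex frame from OnAudioCaptured above
// ...
audioClient.Disconnect();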
(4) Server-side relay. Since the clients use UDP, the server also uses UDP to forward the audio.
using SAEA.Sockets;
using SAEA.Sockets.Base;
using SAEA.Sockets.Interface;
using SAEA.Sockets.Model;
using System;
using System.Collections.Concurrent;
using System.Net;
using System.Threading.Tasks;

namespace GFF.Component.GAudio.Net
{
    // UDP server that relays every received audio packet to all connected clients
    public class AudioServer
    {
        IServerSocket _udpServer;

        ConcurrentDictionary<string, IUserToken> _cache;

        public AudioServer(IPEndPoint endPoint)
        {
            _cache = new ConcurrentDictionary<string, IUserToken>();

            _udpServer = SocketFactory.CreateServerSocket(SocketOptionBuilder.Instance.SetSocket(SAEASocketType.Udp)
                .SetIPEndPoint(endPoint)
                .UseIocp<BaseContext>()
                .SetReadBufferSize(SocketOption.UDPMaxLength)
                .SetWriteBufferSize(SocketOption.UDPMaxLength)
                .SetTimeOut(5000)
                .Build());
            _udpServer.OnAccepted += _udpServer_OnAccepted;
            _udpServer.OnDisconnected += _udpServer_OnDisconnected;
            _udpServer.OnReceive += _udpServer_OnReceive;
        }

        public void Start()
        {
            _udpServer.Start();
        }

        public void Stop()
        {
            _udpServer.Stop();
        }

        private void _udpServer_OnReceive(ISession currentSession, byte[] data)
        {
            // Broadcast the packet to every cached client in parallel
            Parallel.ForEach(_cache.Keys, (id) =>
            {
                try
                {
                    _udpServer.SendAsync(id, data);
                }
                catch { }
            });
        }

        private void _udpServer_OnAccepted(object obj)
        {
            var ut = (IUserToken)obj;
            if (ut != null)
            {
                _cache.TryAdd(ut.ID, ut);
            }
        }

        private void _udpServer_OnDisconnected(string ID, Exception ex)
        {
            _cache.TryRemove(ID, out IUserToken _);
        }
    }
}
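Hosting the relay is just a matter of constructing it with a listen endpoint and calling Start; a minimal console sketch (the port number is an example value):

// Listen on all interfaces and relay audio until the process is told to stop
var audioServer = new AudioServer(new IPEndPoint(IPAddress.Any, 39654));
audioServer.Start();
Console.ReadLine();   // keep the console host alive while relaying
audioServer.Stop();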
(5) Decode. The client takes the data received from the server and decompresses it back into raw audio using the agreed codec.
private readonly BufferedWaveProvider _waveProvider;
// Buffer decoded PCM in the same format the codec records in
_waveProvider = new BufferedWaveProvider(_speexCodec.RecordFormat);

private void _audioClient_OnReceive(byte[] data)
{
    // Decompress the Speex frame back to PCM and queue it for playback
    byte[] decoded = _speexCodec.Decode(data, 0, data.Length);
    _waveProvider.AddSamples(decoded, 0, decoded.Length);
}
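If the network briefly delivers faster than playback drains, the buffered provider will eventually refuse new samples. An optional tweak, not in the original code, caps the buffer and drops the overflow instead (the duration value is illustrative):

// Optional jitter/overflow handling for the playback buffer
_waveProvider.BufferDuration = TimeSpan.FromSeconds(5);   // cap buffered audio at ~5 s
_waveProvider.DiscardOnBufferOverflow = true;             // drop excess instead of throwing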
(6) Play the audio: use a playback device to play the decoded audio data.
private readonly IWavePlayer _waveOut;
// Play whatever the buffered provider accumulates
_waveOut = new WaveOut();
_waveOut.Init(_waveProvider);
_waveOut.Play();
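Putting the snippets together, the whole client pipeline fits in one small class. The sketch below is illustrative only: the class name is made up (in the GFF repository, GAudioClient plays this role), and it assumes the AudioClient and WideBandSpeexCodec shown above.

// capture -> encode -> UDP send, and UDP receive -> decode -> play
public class VoiceChatClient : IDisposable
{
    private readonly WaveIn _waveIn = new WaveIn { BufferMilliseconds = 50, DeviceNumber = 0 };
    private readonly WideBandSpeexCodec _speexCodec = new WideBandSpeexCodec();
    private readonly BufferedWaveProvider _waveProvider;
    private readonly IWavePlayer _waveOut = new WaveOut();
    private readonly AudioClient _audioClient;

    public VoiceChatClient(IPEndPoint serverEndPoint)
    {
        _waveIn.WaveFormat = _speexCodec.RecordFormat;
        _waveIn.DataAvailable += (s, e) =>
            _audioClient.Send(_speexCodec.Encode(e.Buffer, 0, e.BytesRecorded));

        _waveProvider = new BufferedWaveProvider(_speexCodec.RecordFormat);
        _audioClient = new AudioClient(serverEndPoint);
        _audioClient.OnReceive += data =>
        {
            var pcm = _speexCodec.Decode(data, 0, data.Length);
            _waveProvider.AddSamples(pcm, 0, pcm.Length);
        };
        _waveOut.Init(_waveProvider);
    }

    public void Start()
    {
        _audioClient.Connect();
        _waveOut.Play();
        _waveIn.StartRecording();
    }

    public void Dispose()
    {
        _waveIn.StopRecording();
        _waveOut.Stop();
        _audioClient.Disconnect();
    }
}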
3. Test run. Having analysed the key points of voice chat and wrapped the code up step by step, the next thing is to test the result with an example.
The client side is wrapped in a button click handler:
GAudioClient _gAudioClient = null;

private void toolStripDropDownButton2_ButtonClick(object sender, EventArgs e)
{
    // Toggle voice chat: first click starts the client, second click stops it
    if (_gAudioClient == null)
    {
        ClientConfig clientConfig = ClientConfig.Instance();
        _gAudioClient = new GAudioClient(clientConfig.IP, clientConfig.Port + 2);
        _gAudioClient.Start();
    }
    else
    {
        _gAudioClient.Dispose();
        _gAudioClient = null;
    }
}
The server side is wrapped in the Main function:
ConsoleHelper.WriteLine("Initializing voice server...", ConsoleColor.DarkBlue);
_gAudioServer = new GAudioServer(filePort + 1);
ConsoleHelper.WriteLine("Voice server initialized...", ConsoleColor.DarkBlue);
ConsoleHelper.WriteLine("Starting voice server...", ConsoleColor.DarkBlue);
_gAudioServer.Start();
ConsoleHelper.WriteLine("Voice server started", ConsoleColor.DarkBlue);
Everything is in place, so hit F5 and give it a try.
A few shouted "Hello"s, this project's equivalent of Hello World, came through without a problem, so the first milestone is done.
If you repost this article, please credit the source: https://www.cnblogs.com/yswenli/p/14353482.html
More of my work is on GitHub: https://github.com/yswenli/GFF
If you find any problems with this article, or have suggestions, feedback is always welcome.
Thanks for reading. If the topics on this blog interest you, please keep an eye on my future posts. I am yswenli.