I've been looking for game-design inspiration lately.
While searching, I suddenly remembered a PC shoot-'em-up I played years ago: it fired bullets automatically based on the music file you played, and the player couldn't fire manually. Back then I always used a track from O2Jam (劲乐团): V3 (a Beethoven symphony arrangement).
That track is intense from start to finish, the whole screen fills with bullets. So satisfying, and the music itself is a blast.
Of course, that's not the only music-related game out there...
Related games:
...and so on, plus all kinds of newer releases. Most of them probably generate gameplay data automatically from audio analysis first and then tune it by hand. (Just my guess; some teams surely author everything manually.)
To chase that inspiration, I tried out the API to build a few features; it may help with future development.
OK, enough digression.
public static string[] devices { get; }
public static AudioClip Start(string deviceName, bool loop, int lengthSec, int frequency);
public static void End(string deviceName);
public static int GetPosition(string deviceName);
public static bool IsRecording(string deviceName);
OK, those are the main ones.
First check that a capture device exists with Microphone.devices.Length > 0. Then call Microphone.Start, passing the device name to use (one of the entries in Microphone.devices), whether to loop-record into a ring buffer, the buffer length in seconds, and the sampling frequency.
As for spectrograms, I haven't looked into them, so they're not covered here.
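Putting those calls together, a minimal recording flow might look like this (a sketch; picking the first device, a 10-second buffer, and 44100 Hz are my own choices, not requirements):

```csharp
using UnityEngine;

public class MinimalRecordSketch : MonoBehaviour
{
    private AudioClip clip;
    private string device;

    private void Start()
    {
        // Always check that at least one capture device exists first
        if (Microphone.devices.Length == 0)
        {
            Debug.LogWarning("No microphone found");
            return;
        }

        device = Microphone.devices[0];                   // any entry from Microphone.devices
        clip = Microphone.Start(device, true, 10, 44100); // loop into a 10-second ring buffer at 44100 Hz
    }

    private void OnDestroy()
    {
        // Always release the device when done
        if (device != null && Microphone.IsRecording(device))
            Microphone.End(device);
    }
}
```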
public bool GetData(float[] data, int offsetSamples);
AudioClip is our audio object.
We use AudioClip.GetData to read the waveform samples recorded at a given buffer position, and we can then use that data to drive particle effects, lines, geometry, color changes, and so on to visualize the waveform.
The image below is a simple waveform display built on this API. (I laid out a row of RawImages and used the data from AudioClip.GetData to drive each one's width and color. The epic track playing, El Dorado, is one of my favorites; give it a listen if you're interested.)
Players, KTV, KTV scoring: with the features above, plus encode/decode handling for the various audio file formats, you could even build a music player (just add some file I/O). With recording you could build a personal KTV recorder, and you could even score a take by comparing the similarity of its waveform against the original vocal's.
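The scoring idea is not something the Microphone API gives you directly. One naive way to sketch it is cosine similarity between two equal-length sample buffers pulled out with AudioClip.GetData; a real karaoke scorer would need pitch detection and time alignment, so treat this purely as an illustration:

```csharp
using System;

public static class WaveScoreSketch
{
    // Returns a 0..100 score from the cosine similarity of two equal-length sample buffers
    public static float Score(float[] recorded, float[] original)
    {
        if (recorded.Length != original.Length)
            throw new ArgumentException("buffers must have the same length");

        double dot = 0, a = 0, b = 0;
        for (int i = 0; i < recorded.Length; i++)
        {
            dot += recorded[i] * original[i];
            a += recorded[i] * recorded[i];
            b += original[i] * original[i];
        }
        if (a == 0 || b == 0) return 0f;

        var cos = dot / (Math.Sqrt(a) * Math.Sqrt(b)); // -1..1
        return (float)(Math.Max(0, cos) * 100);        // clamp negative correlation to 0
    }
}
```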
Unlimited-length recording: you could also write a recorder with no time limit.
It just needs a small code change: when Microphone.GetPosition gets close to the end of the ring buffer, immediately call Microphone.Start again to begin a fresh recording, then write the previous AudioClip's data out to a file stream.
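That restart-and-flush idea can be sketched like this (a sketch under assumptions: WriteSamplesToStream is a hypothetical helper for whatever file format you target, e.g. WAV/PCM; the 10-second buffer and 0.1-second threshold are my own choices):

```csharp
using UnityEngine;

public class UnlimitedRecordSketch : MonoBehaviour
{
    private const int BufferSec = 10;
    private const int Frequency = 44100;
    private string device;
    private AudioClip clip;
    private float[] flushBuffer;

    private void Start()
    {
        device = Microphone.devices[0];
        clip = Microphone.Start(device, true, BufferSec, Frequency);
        flushBuffer = new float[clip.samples * clip.channels];
    }

    private void Update()
    {
        // when the write position nears the tail of the ring buffer...
        var pos = Microphone.GetPosition(device);
        if (pos >= clip.samples - Frequency / 10) // within ~0.1s of the end
        {
            // ...immediately restart recording into a fresh clip...
            var fullClip = clip;
            Microphone.End(device);
            clip = Microphone.Start(device, true, BufferSec, Frequency);

            // ...and append the finished buffer to the file stream
            fullClip.GetData(flushBuffer, 0);
            WriteSamplesToStream(flushBuffer); // hypothetical: encode and write to a FileStream
            Destroy(fullClip);
        }
    }

    private void WriteSamplesToStream(float[] samples) { /* encode to WAV/PCM and write */ }
}
```

Note that End followed by Start is not seamless; a few samples are lost at each restart, which a real recorder would need to tolerate or stitch around.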
Simple voice chat: for a simple voice-message chat system in a game or app, we've already covered how to record and how to pull data out of an AudioClip, and the code below also shows how AudioClip.SetData is used. So: convert GetData's float[] to byte[], send it to the server over a Socket, have the server broadcast it, and have each receiving client convert the byte[] back to float[], call AudioClip.SetData, assign AudioSource.clip = your clip, then AudioSource.Play. Done.
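The float[] to byte[] step in that pipeline can be sketched as 16-bit PCM packing (my own choice of encoding to halve the payload; you could also send raw float bytes or run a real codec):

```csharp
using System;

public static class PcmConvertSketch
{
    // float samples in -1..1 -> little-endian 16-bit PCM bytes (half the size of raw floats)
    public static byte[] FloatsToPcm16(float[] samples)
    {
        var bytes = new byte[samples.Length * 2];
        for (int i = 0; i < samples.Length; i++)
        {
            var f = samples[i] < -1f ? -1f : (samples[i] > 1f ? 1f : samples[i]); // clamp
            var s = (short)(f * short.MaxValue);
            bytes[i * 2] = (byte)(s & 0xFF);
            bytes[i * 2 + 1] = (byte)((s >> 8) & 0xFF);
        }
        return bytes;
    }

    // 16-bit PCM bytes back to float samples, ready for AudioClip.SetData
    public static float[] Pcm16ToFloats(byte[] bytes)
    {
        var samples = new float[bytes.Length / 2];
        for (int i = 0; i < samples.Length; i++)
        {
            var s = (short)(bytes[i * 2] | (bytes[i * 2 + 1] << 8));
            samples[i] = s / (float)short.MaxValue;
        }
        return samples;
    }
}
```

On the receiving client, run Pcm16ToFloats on the payload, then AudioClip.SetData, assign the clip to an AudioSource, and Play, exactly as described above.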
using System;
using System.Collections.Generic;
using Unity.Collections;
using UnityEngine;
using UnityEngine.UI;
/// <summary>
/// author : jave.lin
/// date : 2020.02.18
/// Tests the various microphone recording features
/// </summary>
public class RecordMicrophoneSoundScript : MonoBehaviour
{
public enum RecordStatusType
{
Unstart,
Recording,
End,
}
public enum RecordErrorType
{
None,
NotFoundDevice,
RecordingError,
}
[Space(10)]
public Text playSoundWaveText;
public AudioClip music;
[Space(10)]
public Text startOrEndRecordBtnText;
public Text statusText;
public Text detailStatusText;
public Text deviceCaptureFreqText;
[Space(10)]
public Dropdown deviceDropdownList;
public Dropdown frequencyDropdownList;
public Slider recordTimeSlider;
public Text recordTimeText;
public Text playProgressText;
public Text recordTimeProgressText;
public Slider playProgressSlider;
public Slider recordTimeProgressSlider;
public Text detailRecordClipText;
[Space(10)]
public Color32 lineLowCol = new Color32(255, 255, 0, 255); // Color32 components are bytes in 0-255; values of 1 would be nearly invisible
public Color32 lineHightCol = new Color32(255, 0, 0, 255);
public int lineW;
public RawImage[] lines;
[Space(10)]
[SerializeField] [ReadOnly] private RecordStatusType recordStatus;
[SerializeField] [ReadOnly] private RecordErrorType errorStatus;
[SerializeField] [ReadOnly] private AudioClip recordedClip;
private AudioSource audioSource; // the source used for playback
private bool startRecord;
private int posWhenEnd;
private float[] bufferHelper;
[SerializeField] private float[] soundWaveBuffer;
[SerializeField] private float[] soundWaveBuffer2;
private void Awake()
{
audioSource = GetComponent<AudioSource>();
soundWaveBuffer = new float[lines.Length];
soundWaveBuffer2 = new float[lines.Length];
}
private void Start()
{
deviceDropdownList.ClearOptions();
deviceDropdownList.AddOptions(new List<string>(Microphone.devices));
deviceDropdownList.value = 0;
deviceDropdownList.RefreshShownValue();
for (int i = 0; i < lines.Length; i++) SetLine(i, 2, 1);
}
private void Update()
{
UpdateAllStatus();
}
private void SetLine(int idx, float multiplier = 2.0f, float t = 0.1f)
{
var line = lines[idx];
var v = line.rectTransform.sizeDelta;
var data = soundWaveBuffer[idx];
v.x = Mathf.Lerp(v.x, lineW * data * multiplier, t);
line.rectTransform.sizeDelta = v;
line.color = Color.Lerp(lineLowCol, lineHightCol, Mathf.Abs(data));
}
// Update the waveform data (I deliberately don't call it a spectrum here)
private void UpdateSoundWaveData(AudioClip clip, int pos)
{
if (clip != null)
{
var offsetPos = pos - lines.Length;
if (offsetPos > 0) clip.GetData(soundWaveBuffer, offsetPos); // enough samples before the current record position: read lines.Length samples directly
else
{
// not enough samples before the position:
// first read the tail of the clip as the starting data
clip.GetData(soundWaveBuffer, clip.samples + offsetPos);
var delta = lines.Length + offsetPos;
if (delta > 0)
{
// then read the remaining samples from the front of the clip,
// because recording uses a ring buffer, decided by the second parameter of
// Microphone.Start(string deviceName, bool loop, int lengthSec, int frequency), which we hard-code to true
clip.GetData(soundWaveBuffer2, 0);
Array.Copy(soundWaveBuffer2, 0, soundWaveBuffer, -offsetPos, delta);
}
}
playSoundWaveText.text = $"PlaySoundWave : {clip.name}";
}
}
// Update the waveform display lines
private void UpdateSoundLines()
{
for (int i = 0; i < lines.Length; i++) SetLine(i);
}
// Update the various status displays
private void UpdateAllStatus()
{
if (Microphone.devices.Length == 0)
{
errorStatus = RecordErrorType.NotFoundDevice;
return;
}
else
{
var deviceName = deviceDropdownList.captionText.text;
Microphone.GetDeviceCaps(deviceName, out int minFreq, out int maxFreq);
deviceCaptureFreqText.text = $"Device Capture MinFreq:{minFreq} MaxFreq:{maxFreq}";
var isRecording = Microphone.IsRecording(deviceName);
if (isRecording)
{
recordStatus = RecordStatusType.Recording;
// GetPosition gives the write position; its ratio to the buffer size, times the recording length, gives the exact elapsed recording time
var pos = Microphone.GetPosition(deviceName);
var recordProgress = recordedClip.length * ((float)pos / recordedClip.samples);
recordTimeProgressText.text = $"{recordProgress.ToString("00.0")}";
recordTimeProgressSlider.value = recordProgress;
// update sound wave ( not a spectrum )
UpdateSoundWaveData(recordedClip, pos); // update the recorded waveform data
UpdateSoundLines(); // update the waveform display lines
}
else
{
if (recordedClip == null)
{
recordStatus = RecordStatusType.Unstart;
}
else
{
recordStatus = RecordStatusType.End;
}
}
startOrEndRecordBtnText.text = startRecord ? "EndRecord" : "StartRecord";
var recordPosition = Microphone.GetPosition(deviceName);
// Debug.Log($"deviceName:{deviceName} isRecording:{isRecording} recordPosition:{recordPosition}");
detailStatusText.text = $"deviceName:{deviceName} isRecording:{isRecording} recordPosition:{recordPosition}";
statusText.text = $"Status : {recordStatus}, Error:{errorStatus}";
if (recordedClip != null)
{
detailRecordClipText.text = $"RecordClipInfo:len:{recordedClip.length},freq:{recordedClip.frequency},channels:{recordedClip.channels},samples:{recordedClip.samples},ambisonic:{recordedClip.ambisonic},loadType:{recordedClip.loadType},preloadAudioData:{recordedClip.preloadAudioData},loadInBg:{recordedClip.loadInBackground},loadState:{recordedClip.loadState}";
}
if (audioSource.clip != null)
{
//Debug.Log($"audioSource.timeSamples:{audioSource.timeSamples}");
playProgressSlider.minValue = 0;
playProgressSlider.maxValue = audioSource.clip.length;
playProgressSlider.value = audioSource.time;
playProgressText.text = $"{audioSource.time.ToString("00.0")}";
var percent = audioSource.time / audioSource.clip.length;
UpdateSoundWaveData(audioSource.clip, (int)(percent * audioSource.clip.samples)); // update the playback waveform data
UpdateSoundLines(); // update the waveform display lines
}
else
{
playProgressSlider.value = playProgressSlider.minValue;
}
}
}
// bound to the Dropdown control's On Value Changed event
public void OnDeviceItemSelectionChanged(int idx)
{
var selection = deviceDropdownList.options.Count > 0 ? deviceDropdownList.options[idx].text : "EMPTY";
Debug.Log($"Devices DropDownList Item Selection Changed, idx : {idx}, selection : {selection}");
}
public void OnStartOrEndRecordBtnClick()
{
if (Microphone.devices.Length == 0)
{
errorStatus = RecordErrorType.NotFoundDevice;
return;
}
var deviceName = deviceDropdownList.captionText.text;
var isRecording = Microphone.IsRecording(deviceName);
if (isRecording) // currently recording
{
posWhenEnd = Microphone.GetPosition(deviceName);
// then stop the recording
Microphone.End(deviceName);
recordStatus = RecordStatusType.End;
startRecord = false;
}
else // not recording
{
if (recordedClip != null) Destroy(recordedClip);
// Human hearing covers roughly 20 Hz to 20,000 Hz
// Sampling frequency: how many discrete samples approximate the analog waveform per second (unit: Hz)
// From my own tests:
// 8820  = 44100 / 5 : about the same as WeChat voice recording (WeChat's may be even lower - less data)
// 11025 = 44100 / 4 up to 44100 : all sound about the same unless you listen carefully
// 22050 = 44100 / 2
// 44100
// 88200 = 44100 * 2
// Higher frequency means better quality but more data
var frequency = Convert.ToInt32(frequencyDropdownList.captionText.text);
recordedClip = Microphone.Start(deviceName, true, (int)recordTimeSlider.value, frequency);
startRecord = true;
recordTimeProgressSlider.minValue = 0;
recordTimeProgressSlider.maxValue = (int)recordTimeSlider.value;
recordTimeProgressSlider.value = 0;
if (recordedClip == null) errorStatus = RecordErrorType.RecordingError;
}
}
public void OnPlayClipBtnClick()
{
if (recordedClip != null)
{
var playingClip = AudioClip.Create("MicrophoneRecord", posWhenEnd, recordedClip.channels, recordedClip.frequency, recordedClip.loadType == AudioClipLoadType.Streaming);
if (bufferHelper == null || bufferHelper.Length < posWhenEnd) bufferHelper = new float[posWhenEnd];
recordedClip.GetData(bufferHelper, 0);
playingClip.SetData(bufferHelper, 0);
// rebuilding playingClip like this makes playingClip.length exact, rather than our fixed buffer length
audioSource.clip = playingClip;
audioSource.Play();
}
}
public void OnPauseBtnClick()
{
audioSource.Pause();
}
public void OnStopPlayClipBtnClick()
{
audioSource.Stop();
}
public void OnRecorddTimeSliderValueChanged(float value)
{
// recordTimeText.text = $"{value}s"; // BUG: value was always 0 here
recordTimeText.text = $"{recordTimeSlider.value}s";
}
public void OnPlayMusicBtnClick()
{
audioSource.clip = music;
audioSource.Play();
}
}
Project: RecordingMicrophoneSoundTesting_2019_3_1f1
Unity version: 2019.3.1f1. Why mention this? Because when I downgraded the project from 2019.3.1f1 to 2018.3.0f2, all the Component data on the objects recorded in the Scene's Hierarchy was lost.
I had planned to build the test in 2019 and then go back to 2018 to publish it to a phone, but... ahem... (2019 builds currently get stuck on "Building Gradle Project" because the official servers restrict network access from the China region; you need a proxy to publish normally, and with VPNs so tightly restricted right now I didn't publish from 2019, so this recording feature hasn't been tested on a phone yet.)