Tongyi Qianwen (6): Audio Recognition

5.2. Audio Recognition

5.2.1. Introduction

Tongyi Qianwen Audio is a large-scale audio-language model developed by Alibaba Cloud. It accepts multiple kinds of audio (speech, natural sounds, music, and singing) together with text as input, and produces text as output. Its key features include:

1. Full-spectrum audio perception: Tongyi Qianwen Audio is a high-performing general-purpose audio understanding model. It supports understanding of natural sounds, human voice, music, and other audio types up to 30 seconds long, covering tasks such as multilingual speech recognition, timestamp localization, speaker emotion and gender recognition, acoustic-scene recognition, and recognition of the instruments, style, and mood of music.

2. Audio-grounded reasoning: the model supports reasoning and creation based on audio content, such as semantic understanding, scene inference, related recommendations, and content creation.

3. Multi-turn audio and text dialogue: the model supports analyzing multiple audio clips and interleaved multi-turn audio-text conversations.

5.2.2. Model Overview

Model service: Tongyi Qianwen Audio
Model name: qwen-audio-turbo
Pricing: free for a limited time

If you are a new user calling a limited-time-free model, your free quota is consumed first, after which the limited-time-free policy takes effect. Exceeding either of the following limits triggers throttling:

  1. Traffic ≤ 120 QPM: no more than 120 complete requests processed per minute;
  2. Token consumption ≤ 100,000 TPM: no more than 100,000 tokens consumed per minute.
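The 120-QPM cap can also be enforced client-side before each request is sent, so the application degrades gracefully instead of receiving throttling errors. Below is a minimal sliding-window limiter sketch; the class name QpmLimiter and its design are illustrative assumptions, not part of the DashScope SDK:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Minimal client-side sliding-window limiter for a per-minute request cap.
 * Illustrative only; not part of the DashScope SDK.
 */
public class QpmLimiter {
    private final int maxPerMinute;
    // Timestamps (ms) of requests sent within the last minute, oldest first.
    private final Deque<Long> stamps = new ArrayDeque<>();

    public QpmLimiter(int maxPerMinute) {
        this.maxPerMinute = maxPerMinute;
    }

    /** Returns true if a request may be sent at nowMillis, recording it if allowed. */
    public synchronized boolean tryAcquire(long nowMillis) {
        // Evict timestamps that have fallen out of the 60-second window.
        while (!stamps.isEmpty() && nowMillis - stamps.peekFirst() >= 60_000) {
            stamps.pollFirst();
        }
        if (stamps.size() >= maxPerMinute) {
            return false; // sending now would exceed the per-minute quota
        }
        stamps.addLast(nowMillis);
        return true;
    }
}
```

A caller would construct `new QpmLimiter(120)` and skip or delay the DashScope call whenever `tryAcquire` returns false.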

5.2.3. Audio Constraints

Input audio is subject to the following constraints:

  1. The audio file must not exceed 10 MB
  2. The audio duration must not exceed 30 s

Supported input formats include the mainstream ones: amr, wav (CodecID: GSM_MS), wav (PCM), 3gp, 3gpp, aac, mp3, and so on. Tongyi Qianwen Audio can parse and understand most commonly encoded audio formats.
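Both constraints can be checked locally before uploading a file. The sketch below uses the standard javax.sound.sampled API; the class name AudioPreflight is an illustrative assumption, and the duration check only works for formats the JDK can decode out of the box (e.g. PCM WAV) — for other formats `getAudioInputStream` throws and the caller must rely on the service-side check:

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.File;

/** Illustrative preflight check for the documented 10 MB / 30 s input limits. */
public class AudioPreflight {
    static final long MAX_BYTES = 10L * 1024 * 1024; // 10 MB
    static final double MAX_SECONDS = 30.0;          // 30 s

    /**
     * Returns true if the file is within both limits. Throws
     * UnsupportedAudioFileException for formats the JDK cannot decode.
     */
    public static boolean withinLimits(File f) throws Exception {
        if (f.length() > MAX_BYTES) {
            return false; // file larger than 10 MB
        }
        try (AudioInputStream in = AudioSystem.getAudioInputStream(f)) {
            AudioFormat fmt = in.getFormat();
            // duration in seconds = number of frames / frames per second
            double seconds = in.getFrameLength() / (double) fmt.getFrameRate();
            return seconds <= MAX_SECONDS;
        }
    }
}
```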

5.2.4. Remote File

The following controller passes a hosted audio URL to qwen-audio-turbo:

import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.InputRequiredException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.utils.Constants;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import java.util.Arrays;
import java.util.Collections;

@RestController
@RequestMapping("/tongyi")
public class AigcAudioController {

    @Value("${tongyi.api-key}")
    private String apiKey;


    public MultiModalConversationResult simpleMultiModalConversationCall(String message)
            throws ApiException, NoApiKeyException, UploadFileException {
        // Set the API key (injected from configuration)
        Constants.apiKey = apiKey;

        MultiModalConversation conv = new MultiModalConversation();
        MultiModalMessage userMessage = MultiModalMessage.builder()
                .role(Role.USER.getValue())
                .content(Arrays.asList(Collections.singletonMap("audio", "https://dashscope.oss-cn-beijing.aliyuncs.com/audios/2channel_16K.wav"),
                        Collections.singletonMap("text", message))).build();
        MultiModalConversationParam param = MultiModalConversationParam.builder()
                .model("qwen-audio-turbo")
                .message(userMessage)
                .build();
        MultiModalConversationResult result = conv.call(param);
        System.out.println(result);

        return result;
    }


    @RequestMapping("/aigc/audio")
    public String callBase(@RequestParam(value = "message", required = false, defaultValue = "这段音频在说什么?") String message) {

        try {
            MultiModalConversationResult result = simpleMultiModalConversationCall(message);
            // Extract the "text" field of the first content item of the first choice.
            return result.getOutput().getChoices().get(0).getMessage().getContent().get(0).get("text").toString();
        } catch (ApiException | NoApiKeyException | UploadFileException e) {
            return "Request failed: " + e.getMessage();
        }
    }
}

5.2.5. Local File

The following controller passes a local file path (using the file:// scheme) to qwen-audio-turbo:

import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.InputRequiredException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.utils.Constants;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import java.util.Arrays;
import java.util.HashMap;

@RestController
@RequestMapping("/tongyi")
public class AigcAudioLocalController {

    @Value("${tongyi.api-key}")
    private String apiKey;


    public MultiModalConversationResult callWithLocalFile(String message)
            throws ApiException, NoApiKeyException, UploadFileException {
        // Set the API key (injected from configuration)
        Constants.apiKey = apiKey;

        String localFilePath1 = "file:///D:/idea-project/pro-2025/tongyi-web/08-17-09-41-01.wav";
//        String localFilePath2 = "file://The_file_absolute_path2";
        MultiModalConversation conv = new MultiModalConversation();
        // The content maps must be mutable, so plain HashMaps are used here.
        MultiModalMessage userMessage = MultiModalMessage.builder().role(Role.USER.getValue())
                .content(Arrays.asList(
                        new HashMap<String, Object>(){{ put("audio", localFilePath1); }},
//                        new HashMap(){{put("audio", localFilePath2);}},
                        new HashMap<String, Object>(){{ put("text", message); }}))
                .build();
        MultiModalConversationParam param = MultiModalConversationParam.builder()
                .model("qwen-audio-turbo")
                .message(userMessage)
                .build();
        MultiModalConversationResult result = conv.call(param);

        return result;
    }


    @RequestMapping("/aigc/audio/local")
    public String callBase(@RequestParam(value = "message", required = false, defaultValue = "这段音频在说什么?") String message) {

        try {
            MultiModalConversationResult result = callWithLocalFile(message);
            // Extract the "text" field of the first content item of the first choice.
            return result.getOutput().getChoices().get(0).getMessage().getContent().get(0).get("text").toString();
        } catch (ApiException | NoApiKeyException | UploadFileException e) {
            return "Request failed: " + e.getMessage();
        }
    }
}

5.2.5.1. Test
GET http://localhost:8081/tongyi/aigc/audio/local?message=如果将这段音频用来回答什么是Spring控制反转这个问题, 在10分满分的情况下,你给几分

HTTP/1.1 200 


The audio discusses Spring's Inversion of Control. Spring's Inversion of Control is a dependency-injection technique that lets an object's dependencies be injected by another object, thereby avoiding tightly coupled code. Therefore, Spring Inversion of Control scores 9 points.
