Splitting and Muxing Audio and Video on Android

 

An MP4 file contains both a video track and an audio track. Android provides the MediaExtractor, MediaMuxer, and MediaFormat classes for extracting the audio or video track on its own and then muxing tracks into a new file. Demuxing and muxing are covered in turn below; the result looks like this:

 


 

Part 1: Demuxing a video

1. Set the data source and get the track count

        MediaExtractor extractor = new MediaExtractor();
        try {
            // Set the data source
            extractor.setDataSource(file.getAbsolutePath());
        } catch (IOException e) {
            e.printStackTrace();
        }
        // Get the number of tracks
        trackCount = extractor.getTrackCount();

2. Iterate over the tracks to find the video and audio track indexes

        // Find the indexes of the video track and the audio track
        for (int i = 0; i < trackCount; i++) {
            // Inspect every track's format
            MediaFormat itemMediaFormat = extractor.getTrackFormat(i);
            String itemMime = itemMediaFormat.getString(MediaFormat.KEY_MIME);
            if (itemMime.startsWith("video")) {
                // Found the video track
                videoTrackIndex = i;
                videoMediaFormat = itemMediaFormat;
                continue;
            }
            if (itemMime.startsWith("audio")) {
                // Found the audio track
                audioTrackIndex = i;
                audioMediaFormat = itemMediaFormat;
            }
        }

3. Prepare the output files (delete any stale ones)

        File videoFile = new File(videoPath);
        File audioFile = new File(audioPath);
        if (videoFile.exists()) {
            videoFile.delete();
        }
        if (audioFile.exists()) {
            audioFile.delete();
        }

4. Write the sample data and release resources

   /**
     * @param extractor source extractor
     * @param format    format of the track (audio or video) being demuxed
     * @param path      output path for the video or audio file
     * @param index     track index
     * @throws IOException
     */
    private void muxerAudio(MediaExtractor extractor, MediaFormat format, String path, int index) throws IOException {
        MediaFormat trackFormat = extractor.getTrackFormat(index);
        MediaMuxer mediaMuxer = new MediaMuxer(path, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        int trackIndex = mediaMuxer.addTrack(trackFormat);
        extractor.selectTrack(index);
        mediaMuxer.start();

        // Assumes KEY_MAX_INPUT_SIZE is present in the format
        int maxBufferSize = format.getInteger(MediaFormat.KEY_MAX_INPUT_SIZE);
        ByteBuffer byteBuffer = ByteBuffer.allocate(maxBufferSize);
        MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();

        // Sample interval: audio and video need different calculations here;
        // using the wrong one yields a track with the wrong duration, or a crash
        long sampleTimeUs;
        try {
            sampleTimeUs = getSampleTime(trackFormat);
        } catch (Exception e) {
            sampleTimeUs = getSampleTime(extractor, byteBuffer);
        }
        while (true) {
            int readSampleDataSize = extractor.readSampleData(byteBuffer, 0);
            if (readSampleDataSize < 0) {
                break;
            }
            bufferInfo.size = readSampleDataSize;
            bufferInfo.offset = 0;
            bufferInfo.flags = extractor.getSampleFlags();
            bufferInfo.presentationTimeUs += sampleTimeUs;
            mediaMuxer.writeSampleData(trackIndex, byteBuffer, bufferInfo);
            // Calling advance() before writeSampleData() would drop the first sample
            extractor.advance();
        }
        // Deselect the track
        extractor.unselectTrack(index);
        mediaMuxer.stop();
        // release() also stops the muxer internally, so the stop() above is optional
        mediaMuxer.release();
    }

The two ways of computing the sample interval:

    /**
     * Derive the interval from the frame rate: suitable for video
     */
    private long getSampleTime(MediaFormat mediaFormat) {
        // Frames per second
        int frameRate = mediaFormat.getInteger(MediaFormat.KEY_FRAME_RATE);
        // Average number of microseconds between frames
        return 1000 * 1000 / frameRate;
    }
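
As a quick sanity check of the arithmetic above, here is a standalone sketch (the class and method names are illustrative, not from the article). At 30 fps the integer division yields 33333 µs per frame; at 25 fps, 40000 µs:

```java
public class FrameIntervalDemo {
    // Average microseconds between frames for a given frame rate,
    // mirroring the integer arithmetic used in getSampleTime(MediaFormat)
    static long frameIntervalUs(int frameRate) {
        return 1000L * 1000L / frameRate;
    }

    public static void main(String[] args) {
        System.out.println(frameIntervalUs(30)); // 33333
        System.out.println(frameIntervalUs(25)); // 40000
    }
}
```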

    /**
     * Derive the interval from consecutive PTS values: suitable for audio
     * (using this one for video makes playback slower)
     */
    private long getSampleTime(MediaExtractor audioExtractor, ByteBuffer buffer) {
        audioExtractor.readSampleData(buffer, 0);
        // Skip the first sync (I) frame
        if (audioExtractor.getSampleFlags() == MediaExtractor.SAMPLE_FLAG_SYNC)
            audioExtractor.advance();
        audioExtractor.readSampleData(buffer, 0);
        long firstPTS = audioExtractor.getSampleTime();
        audioExtractor.advance();
        audioExtractor.readSampleData(buffer, 0);
        long secondPTS = audioExtractor.getSampleTime();
        // The interval is the gap between two consecutive timestamps
        return Math.abs(secondPTS - firstPTS);
    }
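
The core of the PTS method is just the difference between two consecutive timestamps. A minimal sketch with mock values (names and the sample timestamps are illustrative; roughly what AAC at 44.1 kHz with 1024 samples per frame would produce):

```java
public class PtsDeltaDemo {
    // Interval inferred from two consecutive presentation timestamps,
    // mirroring the subtraction in getSampleTime(MediaExtractor, ByteBuffer)
    static long sampleIntervalUs(long firstPtsUs, long secondPtsUs) {
        return Math.abs(secondPtsUs - firstPtsUs);
    }

    public static void main(String[] args) {
        // e.g. AAC frames ~23220 µs apart (1024 samples / 44100 Hz)
        System.out.println(sampleIntervalUs(0, 23220)); // 23220
    }
}
```

Note that because the real method reads samples to measure the gap, the extractor's position has already advanced when it returns.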

Part 2: Muxing a video

Muxing is similar to the demuxing above: set each source on its own MediaExtractor, select the matching track on each, and then write both tracks into a single MediaMuxer.

   /**
     * @param videoPath  source video path
     * @param audioPath  source audio path
     * @param outPath    output path
     */
    private void startComposeTrack(String videoPath, String audioPath, String outPath) {
        try {
            MediaExtractor videoExtractor = new MediaExtractor();
            videoExtractor.setDataSource(videoPath);
            MediaExtractor audioExtractor = new MediaExtractor();
            audioExtractor.setDataSource(audioPath);
            MediaMuxer muxer = new MediaMuxer(outPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
            videoExtractor.selectTrack(0);
            MediaFormat videoFormat = videoExtractor.getTrackFormat(0);
            int videoTrack = muxer.addTrack(videoFormat);
            audioExtractor.selectTrack(0);
            MediaFormat audioFormat = audioExtractor.getTrackFormat(0);
            int audioTrack = muxer.addTrack(audioFormat);

            boolean sawEOS = false;
            int frameCount = 0;
            int offset = 100;
            int sampleSize = 256 * 1024;
            ByteBuffer videoBuf = ByteBuffer.allocate(sampleSize);
            ByteBuffer audioBuf = ByteBuffer.allocate(sampleSize);
            MediaCodec.BufferInfo videoBufferInfo = new MediaCodec.BufferInfo();
            MediaCodec.BufferInfo audioBufferInfo = new MediaCodec.BufferInfo();
            videoExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
            audioExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
            muxer.start();
            // Copy all video samples first
            while (!sawEOS) {
                videoBufferInfo.offset = offset;
                videoBufferInfo.size = videoExtractor.readSampleData(videoBuf, offset);
                if (videoBufferInfo.size < 0) {
                    sawEOS = true;
                    videoBufferInfo.size = 0;
                } else {
                    videoBufferInfo.presentationTimeUs = videoExtractor.getSampleTime();
                    //noinspection WrongConstant
                    videoBufferInfo.flags = videoExtractor.getSampleFlags();
                    muxer.writeSampleData(videoTrack, videoBuf, videoBufferInfo);
                    videoExtractor.advance();
                    frameCount++;
                }
            }

            // Then copy all audio samples
            boolean sawEOS2 = false;
            while (!sawEOS2) {
                audioBufferInfo.offset = offset;
                audioBufferInfo.size = audioExtractor.readSampleData(audioBuf, offset);
                if (audioBufferInfo.size < 0) {
                    sawEOS2 = true;
                    audioBufferInfo.size = 0;
                } else {
                    audioBufferInfo.presentationTimeUs = audioExtractor.getSampleTime();
                    //noinspection WrongConstant
                    audioBufferInfo.flags = audioExtractor.getSampleFlags();
                    muxer.writeSampleData(audioTrack, audioBuf, audioBufferInfo);
                    audioExtractor.advance();
                }
            }
            muxer.stop();
            muxer.release();
            videoExtractor.release();
            audioExtractor.release();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

This step is time-consuming, so run it on a worker thread rather than the main thread.
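
One minimal way to push the work off the main thread (a sketch; the class name is illustrative, and the Runnable stands in for a call like startComposeTrack(videoPath, audioPath, outPath)):

```java
import java.util.concurrent.CompletableFuture;

public class MuxWorker {
    // Kicks off the (long-running) compose step on a background thread;
    // the caller can chain a completion callback or join() to wait
    static CompletableFuture<Void> composeAsync(Runnable composeTask) {
        return CompletableFuture.runAsync(composeTask);
    }

    public static void main(String[] args) {
        composeAsync(() -> System.out.println("composing..."))
                .join(); // wait only for demo purposes; don't block the UI thread
    }
}
```

On Android specifically, the completion callback would typically post back to the main thread (e.g. via a Handler) before touching any UI.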

 
