MoviePy: Installation, Configuration, and Usage

References:
1. MoviePy installation and configuration: https://blog.csdn.net/kd_2015/article/details/80157713
2. Moviepy: editing movies with scripts: http://hao.jobbole.com/moviepy/
3. Deep-Learning/Object_Detection_Tensorflow_API.ipynb: https://github.com/priya-dwivedi/Deep-Learning/blob/master/Object_Detection_Tensorflow_API.ipynb
4. Using TensorFlow to detect objects in video frames: https://blog.csdn.net/ch97ckd/article/details/82700777
MoviePy is a Python module for script-based movie editing. It can read and write many formats, including GIF, and supports basic operations such as cutting, concatenation, and title insertion.
Some MoviePy features depend on requests, but installing MoviePy with pip currently does not pull in that dependency automatically, so you have to install it yourself. Installing MoviePy therefore takes two commands:
pip install MoviePy
pip install requests
After installing, check with pip list; the installed packages should include at least:
Package Version
certifi 2018.4.16
chardet 3.0.4
decorator 4.3.0
idna 2.6
imageio 2.3.0
moviepy 0.2.3.4
numpy 1.14.3
Pillow 5.1.0
pip 10.0.1
requests 2.18.4
setuptools 39.1.0
tqdm 4.23.1
urllib3 1.22
Installing ImageMagick
With the packages above installed you can already use much of MoviePy, but some functions also rely on a program called ImageMagick. If you don't install it, you will get an error like this:
Traceback (most recent call last):
  File "F:\Tool\PythonVE\Movie\lib\site-packages\moviepy\video\VideoClip.py", line 1156, in __init__
    subprocess_call(cmd, verbose=False)
  File "F:\Tool\PythonVE\Movie\lib\site-packages\moviepy\tools.py", line 42, in subprocess_call
    proc = sp.Popen(cmd, **popen_params)
  File "f:\tool\python36\Lib\subprocess.py", line 709, in __init__
    restore_signals, start_new_session)
  File "f:\tool\python36\Lib\subprocess.py", line 997, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "F:/A/MoviePy/Cut.py", line 14, in <module>
    txt_clip = TextClip("字幕", fontsize=70, color='white')
  File "F:\Tool\PythonVE\Movie\lib\site-packages\moviepy\video\VideoClip.py", line 1165, in __init__
    raise IOError(error)
OSError: MoviePy Error: creation of None failed because of the following error:

[WinError 2] The system cannot find the file specified.
The key part:
This error can be due to the fact that ImageMagick is not installed on your computer, or (for Windows users) that you didn't specify the path to the ImageMagick binary in file conf.py, or that the path you specified is incorrect

We have to download it ourselves from the official site, http://www.imagemagick.org/script/download.php, which offers builds for each operating system; the Windows installers are at the bottom of the page.
After downloading, install it wherever you like, clicking Next all the way through; there is no need to configure any environment variables.

Configuring ImageMagick
Installing it is not enough; MoviePy also needs to be told where to find it.
Go to your Python root directory and open MoviePy's configuration file: \Python35\Lib\site-packages\moviepy\config_defaults.py
You will see:
import os
FFMPEG_BINARY = os.getenv('FFMPEG_BINARY', 'ffmpeg-imageio')
IMAGEMAGICK_BINARY = os.getenv('IMAGEMAGICK_BINARY', 'auto-detect')
Just follow the hints in that file. FFMPEG_BINARY does not need to change, because MoviePy installed ffmpeg for us (via imageio) and knows where it is. The one to change is IMAGEMAGICK_BINARY: comment out the original line and set it like this (adjusting the path to wherever you just installed ImageMagick):
IMAGEMAGICK_BINARY = r"C:\Program Files\ImageMagick-7.0.8-Q16\magick.exe"
Now run the failing program again: success! At this point MoviePy is fully installed and configured.
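Since config_defaults.py reads these settings with os.getenv, an alternative that avoids editing the installed package is to set the environment variable before MoviePy is first imported. A minimal sketch (the path is an example; use your own install location):

import os
# Must be set before importing moviepy, because config_defaults.py
# reads IMAGEMAGICK_BINARY via os.getenv at import time.
os.environ["IMAGEMAGICK_BINARY"] = r"C:\Program Files\ImageMagick-7.0.8-Q16\magick.exe"

from moviepy.editor import TextClip
# TextClip needs ImageMagick, so this doubles as a configuration check.
txt = TextClip("hello", fontsize=70, color='white')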
Example test.py:
from moviepy.editor import *
clip = VideoFileClip("video1.MP4").subclip(3, 10)
txt_clip = TextClip("My Holidays 2013", fontsize=70, color='white')
txt_clip = txt_clip.set_pos('center').set_duration(10)
final_clip = CompositeVideoClip([clip, txt_clip])
final_clip.write_videofile("myHolidays_edited.MP4")

Put this file in the same directory as video1.MP4 (I put mine under models\research to make sure no packages were missing). It successfully writes myHolidays_edited.MP4 with the text label rendered on top.

Or generate a GIF:
from moviepy.editor import *
clip1 = VideoFileClip("video1_out.mp4")
clip1.write_gif("final.gif")
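GIF files grow large quickly. A possible refinement (my addition, not from the original post) is to shrink the clip and lower the frame rate before writing:

clip1 = VideoFileClip("video1_out.mp4")
# Half resolution and 10 fps give a much smaller file; tune both to taste.
clip1.resize(0.5).write_gif("final_small.gif", fps=10)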

Or write an AVI:
from moviepy.editor import *
clip = VideoFileClip("myHolidays.mp4").subclip(50, 60)
txt_clip = TextClip("My Holidays 2013", fontsize=70, color='white')
txt_clip = txt_clip.set_pos('center').set_duration(10)
final_clip = CompositeVideoClip([clip, txt_clip])
final_clip.write_videofile("myHolidays_edited.avi", fps=25, codec='mpeg4')
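One note on the codec argument (my summary, not from the original post): MoviePy picks a default codec from the output extension, for example libx264 for .mp4, but for containers such as .avi it is safer to name the codec explicitly, as done above with 'mpeg4'.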

Here is a further example (from reference 3) that detects the objects in a video, boxes each of them, and finally merges several processed videos (this file should be placed in research, the same directory that contains object_detection):
import os
import cv2
import time
import argparse
import multiprocessing
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
%matplotlib inline
from moviepy.editor import VideoFileClip
from IPython.display import HTML
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util

CWD_PATH = os.getcwd()

# Path to the frozen detection graph. This is the actual model that is used for the object detection.

MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
PATH_TO_CKPT = os.path.join(CWD_PATH, 'object_detection', MODEL_NAME, 'frozen_inference_graph.pb')

# List of the strings that is used to add the correct label to each box.

PATH_TO_LABELS = os.path.join(CWD_PATH, 'object_detection', 'data', 'mscoco_label_map.pbtxt')

NUM_CLASSES = 90

# Build the category index mapping class ids to names; it is used by
# visualize_boxes_and_labels_on_image_array() below.
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

# Load the frozen TF model
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

def detect_objects(image_np, sess, detection_graph):
    # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
    image_np_expanded = np.expand_dims(image_np, axis=0)
    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
    # Each box represents a part of the image where a particular object was detected.
    boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
    # Each score represents the level of confidence for each of the objects.
    # The score is shown on the result image, together with the class label.
    scores = detection_graph.get_tensor_by_name('detection_scores:0')
    classes = detection_graph.get_tensor_by_name('detection_classes:0')
    num_detections = detection_graph.get_tensor_by_name('num_detections:0')
    # Actual detection.
    (boxes, scores, classes, num_detections) = sess.run(
        [boxes, scores, classes, num_detections],
        feed_dict={image_tensor: image_np_expanded})
    # Visualization of the results of a detection.
    vis_util.visualize_boxes_and_labels_on_image_array(
        image_np,
        np.squeeze(boxes),
        np.squeeze(classes).astype(np.int32),
        np.squeeze(scores),
        category_index,
        use_normalized_coordinates=True,
        line_thickness=8)
    return image_np

def process_image(image):
    # NOTE: The output should be a color image (3 channels) for the video
    # processing below, i.e. the frame with the detection boxes drawn on it.
    with detection_graph.as_default():
        with tf.Session(graph=detection_graph) as sess:
            image_process = detect_objects(image, sess, detection_graph)
            return image_process
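Note that process_image opens a new tf.Session for every frame, which is very slow. A possible optimization (my suggestion, not part of the referenced notebook) is to create one session up front and reuse it:

sess = tf.Session(graph=detection_graph)

def process_image_fast(image):
    # Reuse the long-lived session instead of opening one per frame.
    return detect_objects(image, sess, detection_graph)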

white_output = 'video1_out.mp4'
clip1 = VideoFileClip("video1.mp4").subclip(0, 2)
white_clip = clip1.fl_image(process_image)  # NOTE: this function expects color images!
%time white_clip.write_videofile(white_output, audio=False)

HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(white_output))

white_output1 = 'cars_out.mp4'
clip1 = VideoFileClip("cars.mp4").subclip(0, 2)
white_clip = clip1.fl_image(process_image)  # NOTE: this function expects color images!
%time white_clip.write_videofile(white_output1, audio=False)

HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(white_output1))

white_output2 = 'fruits1_out.mp4'
clip2 = VideoFileClip("fruits1.mp4").subclip(0, 1)
white_clip = clip2.fl_image(process_image)  # NOTE: this function expects color images!
%time white_clip.write_videofile(white_output2, audio=False)

HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(white_output2))

white_output3 = 'dog_out.mp4'
clip3 = VideoFileClip("dog.mp4").subclip(12, 14)
white_clip = clip3.fl_image(process_image)  # NOTE: this function expects color images!
%time white_clip.write_videofile(white_output3, audio=False)

HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(white_output3))

Merge videos

from moviepy.editor import VideoFileClip, concatenate_videoclips
clip1 = VideoFileClip("cars_out.mp4")
clip2 = VideoFileClip("fruits1_out.mp4")
clip3 = VideoFileClip("dog_out.mp4")
final_clip = concatenate_videoclips([clip1, clip2, clip3], method="compose")
final_clip.write_videofile("my_concatenation.mp4", bitrate="5000k")
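With method="compose", clips that do not share the same resolution are placed centered on a canvas big enough for the largest clip, so the three output videos do not all have to be the same size.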

from moviepy.editor import *
clip = VideoFileClip("my_concatenation.mp4")
clip.write_gif("final.gif")
