4. Basic Recipes

4.1 Capturing to a file

Capturing an image to a file is as simple as specifying the name of the file as the output of whatever capture() method you require:

import time
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (1024, 768)
    camera.start_preview()  # display a full-screen preview
    # Camera warm-up time
    time.sleep(2)
    camera.capture('foo.jpg')  # save the picture

4.2 Capturing to a stream

Capturing an image to a file-like object (a socket(), an io.BytesIO stream, an existing open file object, etc.) is as simple as specifying that object as the output of whatever capture() method you're using:

import io
import time
import picamera

# Create an in-memory stream
my_stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.start_preview()
    # Camera warm-up time
    time.sleep(2)
    camera.capture(my_stream, 'jpeg')

Note that the format is explicitly specified in the case above. The BytesIO object has no filename, so the camera can't automatically figure out what format to use.

One thing to bear in mind is that (unlike specifying a filename) the stream is not automatically closed after capture; picamera assumes that since it didn't open the stream, it can't presume to close it either. However, if the object has a flush method, this will be called prior to capture returning. This should ensure that once capture does return, the data is accessible to other processes, although the object still needs to be closed:

import time
import picamera

# Explicitly open a new file called my_image.jpg
my_file = open('my_image.jpg', 'wb')
with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    camera.capture(my_file)
# At this point my_file.flush() has been called, but the file has
# not yet been closed
my_file.close()

Note that in the case above, we didn't have to specify the format as the camera interrogated the my_file object for its filename (specifically, it looks for a name attribute on the provided object). As well as using stream classes built into Python (like BytesIO) you can also construct your own custom outputs.
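For illustration, a custom output only needs a write() method (and, optionally, a flush() method, which is called when capture completes). The ByteCounter class below is a hypothetical example, not part of picamera: it simply counts the bytes the camera hands it instead of storing them:

```python
class ByteCounter(object):
    """A minimal custom output: picamera only requires a write()
    method, and calls flush() (if present) when capture finishes."""
    def __init__(self):
        self.size = 0

    def write(self, buf):
        # Called repeatedly with chunks of encoded image data
        self.size += len(buf)
        return len(buf)

    def flush(self):
        print('Capture complete: %d bytes received' % self.size)

# With a real camera you would pass an instance to capture(), e.g.:
#   camera.capture(ByteCounter(), format='jpeg')
```

The same pattern works for any sink (a checksum, a network queue, a ring buffer), since picamera never assumes anything about the object beyond those two methods.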

4.3 Capturing to a PIL Image

This is a variation on capturing to a stream. First we capture an image to a BytesIO stream (Python's in-memory stream class), then we rewind the position of the stream to the start, and read the stream into a PIL Image object:

import io
import time
import picamera
from PIL import Image

# Create the in-memory stream
stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    camera.capture(stream, format='jpeg')
# "Rewind" the stream to the beginning so we can read its content
stream.seek(0)
image = Image.open(stream)

4.4 Capturing to an OpenCV object

This is another variation on capturing to a stream. First we capture an image to a BytesIO stream (Python's in-memory stream class), then we convert the stream to a numpy array and read the array with OpenCV:

import io
import time
import picamera
import cv2
import numpy as np

# Create the in-memory stream
stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    camera.capture(stream, format='jpeg')
# Construct a numpy array from the stream
data = np.fromstring(stream.getvalue(), dtype=np.uint8)
# "Decode" the image from the array, preserving colour
image = cv2.imdecode(data, 1)
# OpenCV returns an array with data in BGR order. If you want RGB instead
# use the following...
image = image[:, :, ::-1]

If you want to avoid the JPEG encoding and decoding (which is lossy) and potentially speed up the process, you can now use the classes in the picamera.array module. As OpenCV images are simply numpy arrays arranged in BGR order, you can use the PiRGBArray class and simply capture with the 'bgr' format (given that RGB and BGR data is the same size and configuration, just with reversed colour planes):

import time
import picamera
import picamera.array
import cv2

with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    with picamera.array.PiRGBArray(camera) as stream:
        camera.capture(stream, format='bgr')
        # At this point the image is available as stream.array
        image = stream.array

4.5 Capturing resized images

Sometimes, particularly in scripts which will perform some sort of analysis or processing on images, you may wish to capture smaller images than the current resolution of the camera. Although such resizing can be performed using libraries like PIL or OpenCV, it is considerably more efficient to have the Pi's GPU perform the resizing when capturing the image. This can be done with the resize parameter of the capture() methods:

import time
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (1024, 768)
    camera.start_preview()
    # Camera warm-up time
    time.sleep(2)
    camera.capture('foo.jpg', resize=(320, 240))

The resize parameter can also be specified when recording video with the start_recording() method.

4.6 Capturing consistent images

You may wish to capture a sequence of images all of which look the same in terms of brightness, colour, and contrast (this can be useful in timelapse photography, for example). Various attributes need to be used in order to ensure consistency across multiple shots. Specifically, you need to ensure that the camera's exposure time, white balance, and gains are all fixed:

To fix exposure time, set the shutter_speed attribute to a reasonable value.
To fix exposure gains, let analog_gain and digital_gain settle on reasonable values, then set exposure_mode to 'off'.
To fix white balance, set awb_mode to 'off', then set awb_gains to a (red, blue) tuple of gains.
Optionally, set iso to a fixed value.

It can be difficult to know what appropriate values might be for these attributes. For iso, a simple rule of thumb is that 100 and 200 are reasonable values for daytime, while 400 and 800 are better for low light. To determine a reasonable value for shutter_speed you can query the exposure_speed attribute. For the exposure gains, it's usually enough to wait until analog_gain is greater than 1 (fixing the gains while they are still at their initial low values would produce entirely black frames) before setting exposure_mode to 'off'. Finally, to determine reasonable values for awb_gains, simply query the property while awb_mode is set to something other than 'off'. Again, this will tell you the camera's white balance gains, as determined by the auto white balance algorithm.

The following script provides a brief example of configuring these settings:

import time
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.framerate = 30
    # Wait for the automatic gain control to settle
    time.sleep(2)
    # Now fix the values
    camera.shutter_speed = camera.exposure_speed
    camera.exposure_mode = 'off'
    g = camera.awb_gains
    camera.awb_mode = 'off'
    camera.awb_gains = g
    # Finally, take several photos with the fixed settings
    camera.capture_sequence(['image%02d.jpg' % i for i in range(10)])  # a burst of 10 pictures

4.7 Capturing timelapse sequences

The simplest way to capture long timelapse sequences is with the capture_continuous() method. With this method, the camera captures images continually until you tell it to stop. Images are automatically given unique names, and you can easily control the delay between captures. The following example shows how to capture images with a 5-minute delay between each shot:

import time
import picamera

with picamera.PiCamera() as camera:
    camera.start_preview()
    time.sleep(2)
    for filename in camera.capture_continuous('img{counter:03d}.jpg'):
        print('Captured %s' % filename)
        time.sleep(300) # wait 5 minutes

However, you may wish to capture images at a particular time, say at the start of every hour. This simply requires a refinement of the delay in the loop (the datetime module is slightly easier to use for calculating dates and times; this example also demonstrates the timestamp template in the captured filenames):

import time
import picamera
from datetime import datetime, timedelta

def wait():
    # Calculate the delay to the start of the next hour
    next_hour = (datetime.now() + timedelta(hours=1)).replace(
        minute=0, second=0, microsecond=0)
    delay = (next_hour - datetime.now()).seconds
    time.sleep(delay)

with picamera.PiCamera() as camera:
    camera.start_preview()
    wait()
    for filename in camera.capture_continuous('img{timestamp:%Y-%m-%d-%H-%M}.jpg'):
        print('Captured %s' % filename)
        wait()

4.8 Capturing in low light

Using similar tricks to those in capturing consistent images, the Pi's camera can capture images in low light conditions. The primary objective is to set a high gain and a long exposure time to allow the camera to gather as much light as possible. However, the shutter_speed attribute is constrained by the camera's framerate, so the first thing we need to do is set a very slow framerate. The following script captures an image with a 6-second exposure time (the maximum the Pi's camera module is currently capable of):
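The constraint is simple arithmetic: the exposure cannot exceed one frame period. The sketch below (max_shutter_us is an illustrative helper, not a picamera API) computes the longest shutter speed, in microseconds as the shutter_speed attribute expects, that a given framerate permits:

```python
from fractions import Fraction

def max_shutter_us(framerate):
    """Longest possible exposure in microseconds at a given
    framerate: one full frame period (1/framerate seconds)."""
    return int(1000000 / Fraction(framerate))

print(max_shutter_us(30))              # -> 33333 (about 1/30 s)
print(max_shutter_us(Fraction(1, 6)))  # 1/6 fps permits a 6 s exposure -> 6000000
```

This is why the script that follows sets camera.framerate to Fraction(1, 6) before asking for a 6000000 microsecond shutter speed.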

import picamera
from time import sleep
from fractions import Fraction

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    # Set a framerate of 1/6fps, then set shutter
    # speed to 6s and ISO to 800
    camera.framerate = Fraction(1, 6)
    camera.shutter_speed = 6000000
    camera.exposure_mode = 'off'
    camera.iso = 800
    # Give the camera a good long time to measure AWB
    # (you may wish to use fixed AWB instead)
    sleep(10)
    # Finally, capture an image with a 6s exposure. Due
    # to mode switching on the still port, this will take
    # longer than 6 seconds
    camera.capture('dark.jpg')

In anything other than dark conditions, the image produced by this script is likely to be completely white or at least heavily over-exposed.

4.9 Capturing to a network stream

This is a variation on capturing timelapse sequences. Here we have two scripts: a server (presumably on a fast machine) which listens for a connection from the Raspberry Pi, and a client which runs on the Raspberry Pi and sends a continual stream of images to the server. We'll use a very simple communication protocol: first the length of the image will be sent as a 32-bit integer (in Little Endian format), then this will be followed by the bytes of the image data. If the length is 0, this indicates that the connection should be closed as no more images will be forthcoming. The protocol is illustrated below:


(Protocol diagram: a repeated sequence of [32-bit LE length][image data] frames, terminated by a zero-length header.)
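The framing itself needs nothing beyond the standard struct module. This small sketch (using an in-memory stream rather than a real socket, and a placeholder payload rather than real JPEG data) shows how a payload is length-prefixed on the way out and parsed on the way back in:

```python
import io
import struct

def send_frame(stream, payload):
    # Prefix the payload with its length as a 32-bit little-endian int
    stream.write(struct.pack('<L', len(payload)))
    stream.write(payload)

def recv_frame(stream):
    # Read the length header; a length of zero signals end-of-stream
    length = struct.unpack('<L', stream.read(4))[0]
    if length == 0:
        return None
    return stream.read(length)

wire = io.BytesIO()
send_frame(wire, b'\xff\xd8 fake jpeg data \xff\xd9')
send_frame(wire, b'')  # zero length: "no more images"
wire.seek(0)
print(recv_frame(wire))  # the payload
print(recv_frame(wire))  # None - the connection should be closed
```

The server and client scripts below do exactly this, but with a socket's file-like object in place of the BytesIO.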

First the server script (which relies on PIL for reading JPEGs, but you could replace this with any other suitable graphics library, e.g. OpenCV or GraphicsMagick):

import io
import socket
import struct
from PIL import Image

# Start a socket listening for connections on 0.0.0.0:8000 (0.0.0.0 means
# all interfaces)
server_socket = socket.socket()
server_socket.bind(('0.0.0.0', 8000))
server_socket.listen(0)

# Accept a single connection and make a file-like object out of it
connection = server_socket.accept()[0].makefile('rb')
try:
    while True:
        # Read the length of the image as a 32-bit unsigned int. If the
        # length is zero, quit the loop
        image_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
        if not image_len:
            break
        # Construct a stream to hold the image data and read the image
        # data from the connection
        image_stream = io.BytesIO()
        image_stream.write(connection.read(image_len))
        # Rewind the stream, open it as an image with PIL and do some
        # processing on it
        image_stream.seek(0)
        image = Image.open(image_stream)
        print('Image is %dx%d' % image.size)
        image.verify()
        print('Image is verified')
finally:
    connection.close()
    server_socket.close()

Now for the client side of things, on the Raspberry Pi:
import io
import socket
import struct
import time
import picamera

# Connect a client socket to my_server:8000 (change my_server to the
# hostname of your server)
client_socket = socket.socket()
client_socket.connect(('my_server', 8000))

# Make a file-like object out of the connection
connection = client_socket.makefile('wb')
try:
    with picamera.PiCamera() as camera:
        camera.resolution = (640, 480)
        # Start a preview and let the camera warm up for 2 seconds
        camera.start_preview()
        time.sleep(2)

        # Note the start time and construct a stream to hold image data
        # temporarily (we could write it directly to connection but in this
        # case we want to find out the size of each capture first to keep
        # our protocol simple)
        start = time.time()
        stream = io.BytesIO()
        for foo in camera.capture_continuous(stream, 'jpeg'):
            # Write the length of the capture to the stream and flush to
            # ensure it actually gets sent
            connection.write(struct.pack('<L', stream.tell()))
            connection.flush()
            # Rewind the stream and send the image data over the wire
            stream.seek(0)
            connection.write(stream.read())
            # If we've been capturing for more than 30 seconds, quit
            if time.time() - start > 30:
                break
            # Reset the stream for the next capture
            stream.seek(0)
            stream.truncate()
    # Write a length of zero to the stream to signal we're done
    connection.write(struct.pack('<L', 0))
finally:
    connection.close()
    client_socket.close()

The server script should be run first, to ensure there's a listening socket ready to accept a connection from the client script.

4.10 Recording video to a file

Recording a video to a file is simple:

import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.start_recording('my_video.h264')
    camera.wait_recording(60)
    camera.stop_recording()

Note that we use wait_recording() in the example above instead of the time.sleep() we've been using in the image capture recipes. The wait_recording() method is similar in that it will pause for the number of seconds specified, but unlike time.sleep() it will continually check for recording errors (e.g. an out-of-disk-space condition) while it is waiting. If we had used time.sleep() instead, such errors would only be raised by the stop_recording() call (which could be long after the error actually occurred).

4.11 Recording video to a stream

This is very similar to recording video to a file:

import io
import picamera

stream = io.BytesIO()
with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.start_recording(stream, format='h264', quality=23)
    camera.wait_recording(15)
    camera.stop_recording()

Here we've set the quality parameter to indicate to the encoder the level of image quality we'd like it to try and maintain. The camera's H.264 encoder is primarily constrained by two parameters:

bitrate limits the encoder's output to a certain number of bits per second. The default is 17000000 (17Mbps), and the maximum is 25000000 (25Mbps). Higher values give the encoder more "freedom" to encode at higher qualities. You will likely find that the default doesn't constrain the encoder at all except at higher recording resolutions.
quality tells the encoder what level of image quality to maintain. Values can be between 1 (highest quality) and 40 (lowest quality), with typical values providing a reasonable trade-off between bandwidth and quality being between 20 and 25.

As well as using stream classes built into Python (like BytesIO) you can also construct your own custom outputs. This is particularly useful for video recording, as discussed in the linked recipe.

4.12 Recording over multiple files

If you wish to split your recording over multiple files, you can use the split_recording() method to accomplish this:

import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.start_recording('1.h264')
    camera.wait_recording(5)
    for i in range(2, 11):
        camera.split_recording('%d.h264' % i)
        camera.wait_recording(5)
    camera.stop_recording()

This should produce 10 video files named 1.h264, 2.h264, etc., each of which is approximately 5 seconds long (approximately because the split_recording() method will only split files at key-frames).

The record_sequence() method can also be used to achieve this with slightly cleaner code:

import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    for filename in camera.record_sequence(
            '%d.h264' % i for i in range(1, 11)):
        camera.wait_recording(5)

Changed in version 1.3: the record_sequence() method was introduced in version 1.3.

4.13 Recording to a circular stream

This is similar to recording video to a stream, but uses a special kind of in-memory stream provided by the picamera library. The PiCameraCircularIO class implements a ring-buffer-based stream, specifically intended for video recording. This enables you to keep an in-memory stream containing the last n seconds of video recorded (where n is determined by the bitrate of the video recording and the size of the ring buffer underlying the stream).
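As a rough back-of-the-envelope estimate (ring_buffer_bytes is an illustrative helper, not a picamera API; 17000000 is the encoder's default bitrate), the memory needed to hold a given number of seconds is simply bits per second times seconds, divided by 8:

```python
def ring_buffer_bytes(seconds, bitrate=17000000):
    """Approximate ring-buffer size (bytes) needed to hold `seconds`
    of H.264 video at the given bitrate (bits per second)."""
    return seconds * bitrate // 8

print(ring_buffer_bytes(20))  # 20 s at the default 17Mbps -> 42500000 bytes (~40 MB)
```

In practice the actual footprint depends on how well the encoder stays under the bitrate cap, but this gives a sense of why long buffers at high bitrates can consume a significant fraction of the Pi's RAM.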

A typical use case for this sort of storage is security applications, where one wishes to detect motion and only write video of the motion to disk. This example keeps 20 seconds of video in memory until the write_now function returns True (in this implementation that is random, but one could, for example, replace it with some sort of motion detection algorithm). Once write_now returns True, the script waits 10 more seconds (so that the buffer contains 10 seconds of video from before the event, and 10 seconds after) and writes the resulting video to disk before returning to waiting:

import io
import random
import picamera

def write_now():
    # Randomly return True (like a fake motion detection routine)
    return random.randint(0, 10) == 0

def write_video(stream):
    print('Writing video!')
    with stream.lock:
        # Find the first header frame in the video
        for frame in stream.frames:
            if frame.frame_type == picamera.PiVideoFrameType.sps_header:
                stream.seek(frame.position)
                break
        # Write the rest of the stream to disk
        with io.open('motion.h264', 'wb') as output:
            output.write(stream.read())

with picamera.PiCamera() as camera:
    stream = picamera.PiCameraCircularIO(camera, seconds=20)
    camera.start_recording(stream, format='h264')
    try:
        while True:
            camera.wait_recording(1)
            if write_now():
                # Keep recording for 10 seconds and only then write the
                # stream to disk
                camera.wait_recording(10)
                write_video(stream)
    finally:
        camera.stop_recording()

In the script above we use the threading lock in the stream's lock attribute to prevent the camera's background writing thread from changing the stream while our own thread reads from it (as the stream is a circular buffer, a write can remove information that is about to be read). If we had stopped recording on the stream before writing it out, we could have eliminated the with stream.lock line in the write_video function.

4.14 Recording to a network stream

This is similar to recording video to a stream, except that instead of an in-memory stream like BytesIO, we will use a file-like object created from a socket(). Unlike the example of capturing to a network stream, we don't need to complicate our network protocol by writing things like the length of each image. This time we're sending a continual stream of video frames (which necessarily incorporates such information, albeit in a much more efficient form), so we can simply dump the recording straight to the network socket.

Firstly, the server-side script, which will simply read the video stream and pipe it to a media player for display:

import socket
import subprocess

# Start a socket listening for connections on 0.0.0.0:8000 (0.0.0.0 means
# all interfaces)
server_socket = socket.socket()
server_socket.bind(('0.0.0.0', 8000))
server_socket.listen(0)

# Accept a single connection and make a file-like object out of it
connection = server_socket.accept()[0].makefile('rb')
try:
    # Run a viewer with an appropriate command line. Uncomment the mplayer
    # version if you would prefer to use mplayer instead of VLC
    cmdline = ['vlc', '--demux', 'h264', '-']
    #cmdline = ['mplayer', '-fps', '25', '-cache', '1024', '-']
    player = subprocess.Popen(cmdline, stdin=subprocess.PIPE)
    while True:
        # Repeatedly read 1k of data from the connection and write it to
        # the media player's stdin
        data = connection.read(1024)
        if not data:
            break
        player.stdin.write(data)
finally:
    connection.close()
    server_socket.close()
    player.terminate()

Note:

If you run this script on Windows you will probably need to provide a complete path to the VLC or mplayer executable. If you run this script on Mac OS X, and are using Python installed from MacPorts, please ensure you have also installed VLC or mplayer from MacPorts.

You will probably notice several seconds of latency with this setup. This is normal and is because media players buffer several seconds to guard against unreliable network streams. Some media players (notably mplayer in this case) permit the user to skip to the end of the buffer (press the right cursor key in mplayer), reducing the latency at the cost of a higher risk that delayed/dropped network packets will interrupt playback.

Now for the client-side script, which simply starts a recording over a file-like object created from the network socket:

import socket
import time
import picamera

# Connect a client socket to my_server:8000 (change my_server to the
# hostname of your server)
client_socket = socket.socket()
client_socket.connect(('my_server', 8000))

# Make a file-like object out of the connection
connection = client_socket.makefile('wb')
try:
    with picamera.PiCamera() as camera:
        camera.resolution = (640, 480)
        camera.framerate = 24
        # Start a preview and let the camera warm up for 2 seconds
        camera.start_preview()
        time.sleep(2)
        # Start recording, sending the output to the connection for 60
        # seconds, then stop
        camera.start_recording(connection, format='h264')
        camera.wait_recording(60)
        camera.stop_recording()
finally:
    connection.close()
    client_socket.close()

It should also be noted that the effect of the above is much more easily achieved (at least on Linux) with a combination of netcat and the raspivid executable. For example:

server-side: nc -l 8000 | vlc --demux h264 -
client-side: raspivid -w 640 -h 480 -t 60000 -o - | nc my_server 8000

However, this recipe does serve as a starting point for video streaming applications. It's also possible to reverse the direction of this recipe relatively easily. In this scenario, the Pi acts as the server, waiting for a connection from the client. When it accepts a connection, it starts streaming video over it for 60 seconds. Another variation (just for the purposes of demonstration) is that we initialize the camera straight away instead of waiting for a connection, to allow the stream to start faster on connection:

import socket
import time
import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.framerate = 24

    server_socket = socket.socket()
    server_socket.bind(('0.0.0.0', 8000))
    server_socket.listen(0)

    # Accept a single connection and make a file-like object out of it
    connection = server_socket.accept()[0].makefile('wb')
    try:
        camera.start_recording(connection, format='h264')
        camera.wait_recording(60)
        camera.stop_recording()
    finally:
        connection.close()
        server_socket.close()

One advantage of this setup is that no script is needed on the client side - we can simply use VLC with a network URL:

vlc tcp/h264://my_pi_address:8000/

Note:
VLC (or mplayer) will not work for playback on a Pi. Neither is (currently) capable of using the GPU for decoding, so they attempt to perform video decoding on the Pi's CPU (which is not powerful enough for the task). You will need to run these applications on a faster machine (though "faster" is a relative term here: even an Atom-powered netbook should be quick enough for the task at non-HD resolutions).

4.15 Overlaying images on the preview

The camera preview system can operate multiple layered renderers simultaneously. While the picamera library only permits a single renderer to be connected to the camera's preview port, it does permit additional renderers to be created which display a static image. These overlaid renderers can be used to create simple user interfaces.

Note:
Overlay images will not appear in image captures or video recordings. If you need to embed additional information in the output of the camera, please refer to overlaying text on the output.

One difficulty of using overlay renderers is that they expect unencoded RGB input which is padded up to the camera's block size. The camera's block size is 32x16, so any image data provided to a renderer must have a width which is a multiple of 32 and a height which is a multiple of 16. The specific RGB format expected is interleaved unsigned bytes. If all this sounds complicated, don't worry; it's quite simple to produce in practice.
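The rounding-up involved can be expressed as a tiny helper (pad_to is just an illustrative name, not part of picamera):

```python
def pad_to(size, block=(32, 16)):
    """Round a (width, height) pair up to the camera's 32x16 block size."""
    width, height = size
    return (
        ((width + block[0] - 1) // block[0]) * block[0],
        ((height + block[1] - 1) // block[1]) * block[1],
    )

print(pad_to((100, 100)))  # -> (128, 112)
print(pad_to((640, 480)))  # already aligned -> (640, 480)
```

The PIL example below performs exactly this computation inline when constructing the padded image.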

The following example demonstrates loading an arbitrarily sized image with PIL, padding it to the required size, and producing the unencoded RGB data for the call to add_overlay():

import picamera
from PIL import Image
from time import sleep

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.framerate = 24
    camera.start_preview()

    # Load the arbitrarily sized image
    img = Image.open('overlay.png')
    # Create an image padded to the required size with
    # mode 'RGB'
    pad = Image.new('RGB', (
        ((img.size[0] + 31) // 32) * 32,
        ((img.size[1] + 15) // 16) * 16,
        ))
    # Paste the original image into the padded one
    pad.paste(img, (0, 0))

    # Add the overlay with the padded image as the source,
    # but the original image's dimensions
    o = camera.add_overlay(pad.tostring(), size=img.size)
    # By default, the overlay is in layer 0, beneath the
    # preview (which defaults to layer 2). Here we make
    # the new overlay semi-transparent, then move it above
    # the preview
    o.alpha = 128
    o.layer = 3

    # Wait indefinitely until the user terminates the script
    while True:
        sleep(1)

Alternatively, instead of using an image file as the source, you can produce an overlay directly from a numpy array. In the following example, we construct a numpy array with the same resolution as the screen, then draw a white cross through its center, and overlay it on the preview as a simple crosshair:

import time
import picamera
import numpy as np

# Create an array representing a 1280x720 image of
# a cross through the center of the display. The shape of
# the array must be of the form (height, width, color)
a = np.zeros((720, 1280, 3), dtype=np.uint8)
a[360, :, :] = 0xff
a[:, 640, :] = 0xff

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.framerate = 24
    camera.start_preview()
    # Add the overlay directly into layer 3 with transparency;
    # we can omit the size parameter of add_overlay as the
    # size is the same as the camera's resolution
    o = camera.add_overlay(np.getbuffer(a), layer=3, alpha=64)
    try:
        # Wait indefinitely until the user terminates the script
        while True:
            time.sleep(1)
    finally:
        camera.remove_overlay(o)

Given that overlaid renderers can be hidden (by moving them below the preview's layer, which defaults to 2), made semi-transparent (with the alpha property), and resized so that they don't fill the screen, they can be used to construct simple user interfaces.

New in version 1.8.

4.16 Overlaying text on the output

The camera includes a rudimentary annotation facility which permits up to 255 characters of ASCII text to be overlaid on all output (including the preview, image captures, and video recordings). To achieve this, simply assign a string to the annotate_text attribute:

import picamera
import time

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.framerate = 24
    camera.start_preview()
    camera.annotate_text = 'Hello world!'
    time.sleep(2)
    # Take a picture including the annotation
    camera.capture('foo.jpg')

With a little ingenuity, it's possible to display longer strings:

import picamera
import time
import itertools

s = "This message would be far too long to display normally..."

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.framerate = 24
    camera.start_preview()
    camera.annotate_text = ' ' * 31
    for c in itertools.cycle(s):
        camera.annotate_text = camera.annotate_text[1:31] + c
        time.sleep(0.1)

And of course, it can be used to display (and embed) a timestamp in recordings (this recipe also demonstrates drawing a background behind the timestamp for contrast, with the annotate_background attribute):

import picamera
import datetime as dt

with picamera.PiCamera() as camera:
    camera.resolution = (1280, 720)
    camera.framerate = 24
    camera.start_preview()
    camera.annotate_background = picamera.Color('black')
    camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    camera.start_recording('timestamped.h264')
    start = dt.datetime.now()
    while (dt.datetime.now() - start).seconds < 30:
        camera.annotate_text = dt.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
        camera.wait_recording(0.2)
    camera.stop_recording()

New in version 1.7.

4.17 Controlling the LED

In certain circumstances, you may find the red LED on the camera module a hindrance. For example, in the case of automated close-up wildlife photography, the LED may scare off animals. It can also cause unwanted reflected red glare with close-up subjects.

One trivial way to deal with this is simply to place some opaque covering over the LED (e.g. blue-tack or electrician's tape). Another method is to use the disable_camera_led option in the boot configuration.
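On a Raspbian-style setup, the boot option is a single line in /boot/config.txt (a reboot is required for it to take effect):

```
# /boot/config.txt - keep the camera module's red LED off at boot
disable_camera_led=1
```

This disables the LED globally, so unlike the led attribute shown below it requires no elevated privileges at runtime.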

However, if you have the RPi.GPIO package installed, and provided your Python process is running with sufficient privileges (typically this means running as root with sudo python), you can also control the LED via the led attribute:

import picamera

with picamera.PiCamera() as camera:
    # Turn the camera's LED off
    camera.led = False
    # Take a picture while the LED remains off
    camera.capture('foo.jpg')
