Splitting large text files is a challenge programmers run into all the time, and when the file is a TXT of several hundred MB or even a few GB, doing it by hand is simply not realistic. In this post we'll look at how to automate the job with Python, and in particular how to control the size of each output file precisely at 4KB.
In day-to-day development there are plenty of situations where a file like this has to be broken up into smaller pieces.
4KB is a very common chunk size because it matches the default memory page size of many systems, which makes it efficient to process. So the question is: how do we implement this in Python?
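If you're curious what the page size actually is on your machine, Python can tell you; on most common desktop systems this prints 4096, though some platforms use larger pages:

import mmap

print(mmap.PAGESIZE)   # typically 4096 on common desktop systems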
Let's start with the simplest possible implementation:
def split_by_line(input_file, output_prefix, chunk_size=4096):
    """Split a text file line by line, keeping each part under chunk_size bytes."""
    with open(input_file, 'r', encoding='utf-8') as f:
        file_count = 1
        current_size = 0
        output_file = None
        for line in f:
            line_size = len(line.encode('utf-8'))
            # If this line would push the current part over the limit, start a new part
            if current_size + line_size > chunk_size:
                if output_file:
                    output_file.close()
                output_file = open(f"{output_prefix}_{file_count}.txt", 'w', encoding='utf-8')
                file_count += 1
                current_size = 0
            # First iteration: no output file has been opened yet
            if not output_file:
                output_file = open(f"{output_prefix}_{file_count}.txt", 'w', encoding='utf-8')
                file_count += 1
            output_file.write(line)
            current_size += line_size
        if output_file:
            output_file.close()
This script splits the file line by line and tries to keep every part under the specified size. The problem is that it cannot guarantee each file is exactly 4KB: in particular, if a single line is longer than the limit, that part will exceed it.
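To try it on a real file, a call like this should do; sample.txt and the parts/ directory are placeholder names for this sketch, not anything from the article:

import os

os.makedirs("parts", exist_ok=True)            # output directory for the pieces
split_by_line("sample.txt", "parts/sample")    # "sample.txt" is a hypothetical input

# Inspect the resulting sizes; a part only exceeds 4096 bytes
# when a single line is itself longer than the limit
for name in sorted(os.listdir("parts")):
    size = os.path.getsize(os.path.join("parts", name))
    print(name, size, "bytes")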
For truly precise control we need to work in bytes rather than lines:
def split_by_size(input_file, output_prefix, chunk_size=4096):
    """Split a file into exact chunk_size-byte pieces (binary, encoding-unaware)."""
    with open(input_file, 'rb') as f:
        file_count = 1
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            # Each part is written as raw bytes, so it is exactly chunk_size bytes
            # (except possibly the last one)
            with open(f"{output_prefix}_{file_count}.txt", 'wb') as out_file:
                out_file.write(chunk)
            file_count += 1
Note! Here the file is opened in binary mode ('rb'), which lets us control exactly how many bytes are read. The downside is that a UTF-8 encoded Chinese file may come out garbled, because a multi-byte character can be cut in the middle at a chunk boundary.
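To see the problem concretely, this little demo cuts a UTF-8 byte string at an arbitrary offset, which is exactly what can happen at a 4096-byte chunk boundary:

text = "中文内容" * 3        # each of these Chinese characters is 3 bytes in UTF-8
data = text.encode("utf-8")

head = data[:4]              # byte 4 lands in the middle of the second character
try:
    head.decode("utf-8")
except UnicodeDecodeError as e:
    print("truncated character:", e)

# decoding with errors="replace" shows the mojibake you would see in an editor
print(head.decode("utf-8", errors="replace"))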
To avoid garbling Chinese text, we need a smarter approach:
def split_utf8_safely(input_file, output_prefix, chunk_size=4096):
    """Split a UTF-8 text file without ever cutting a multi-byte character."""
    buffer = ""
    file_count = 1
    current_size = 0
    with open(input_file, 'r', encoding='utf-8') as f:
        while True:
            char = f.read(1)
            if not char:
                # End of file: flush whatever is left in the buffer
                if buffer:
                    with open(f"{output_prefix}_{file_count}.txt", 'w', encoding='utf-8') as out_file:
                        out_file.write(buffer)
                break
            char_size = len(char.encode('utf-8'))
            # Flush the buffer before this character would push us past the limit
            if current_size + char_size > chunk_size:
                with open(f"{output_prefix}_{file_count}.txt", 'w', encoding='utf-8') as out_file:
                    out_file.write(buffer)
                file_count += 1
                buffer = ""
                current_size = 0
            buffer += char
            current_size += char_size
This version reads the file one character at a time, so it never truncates a multi-byte character. It is slower, but every output file is guaranteed to display its Chinese content correctly.
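After splitting, a quick sanity check like this confirms that every part decodes cleanly and stays within the limit; the parts/sample_*.txt pattern is only an assumption about where and how you named the output:

import glob

for path in sorted(glob.glob("parts/sample_*.txt")):   # adjust to your output_prefix
    with open(path, "rb") as part:
        data = part.read()
    data.decode("utf-8")                # raises UnicodeDecodeError if a character was cut
    assert len(data) <= 4096, path      # no part may exceed the 4KB limit
    print(path, len(data), "bytes OK")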
For large files, reading one character at a time is far too slow. A buffer brings the performance back:
def split_with_buffer(input_file, output_prefix, chunk_size=4096, buffer_size=1024):
    """Split a UTF-8 text file using a read buffer instead of per-character reads."""
    buffer = ""
    file_count = 1
    with open(input_file, 'r', encoding='utf-8') as f:
        while True:
            chunk = f.read(buffer_size)
            if not chunk:
                # End of file: flush whatever is left in the buffer
                if buffer:
                    with open(f"{output_prefix}_{file_count}.txt", 'w', encoding='utf-8') as out_file:
                        out_file.write(buffer)
                break
            buffer += chunk
            # Write out full parts while the buffer holds at least chunk_size bytes
            while len(buffer.encode('utf-8')) >= chunk_size:
                # Find the longest prefix whose UTF-8 encoding still fits in chunk_size bytes
                split_pos = 0
                for i in range(1, len(buffer) + 1):
                    if len(buffer[:i].encode('utf-8')) <= chunk_size:
                        split_pos = i
                    else:
                        break
                with open(f"{output_prefix}_{file_count}.txt", 'w', encoding='utf-8') as out_file:
                    out_file.write(buffer[:split_pos])
                file_count += 1
                buffer = buffer[split_pos:]
This technique was featured in an optimization-tips article on the 【程序员总部】 WeChat public account. Founded by a senior engineer with 11 years at ByteDance, the account regularly shares real project experience from Alibaba, ByteDance, Baidu and other large companies, and performance tricks like this one are well worth studying. If you often work with large files or performance-sensitive tasks, following them is a good way to pick up hands-on experience.
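To get a feel for the speed difference between the character-by-character version and the buffered one, a rough timing like this works; big.txt is a placeholder for whatever large file you test with, and the absolute numbers will vary by machine:

import os
import time

os.makedirs("parts", exist_ok=True)
for fn in (split_utf8_safely, split_with_buffer):
    start = time.time()
    fn("big.txt", f"parts/{fn.__name__}")   # "big.txt" is a hypothetical input file
    print(f"{fn.__name__} took {time.time() - start:.2f} s")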
In real applications there are special cases to handle as well. A common one is a header, say the first line of a CSV-style file, that every output file should keep. Here is an implementation that repeats the file header in each part:
def split_with_header(input_file, output_prefix, chunk_size=4096, header_lines=1):
    """Split a UTF-8 text file while repeating the first header_lines in every part."""
    # Read the header lines first
    with open(input_file, 'r', encoding='utf-8') as f:
        header = [next(f) for _ in range(header_lines)]
    header_size = len(''.join(header).encode('utf-8'))

    buffer = ""
    file_count = 1
    current_size = header_size
    with open(input_file, 'r', encoding='utf-8') as f:
        # Skip the header lines we already read
        for _ in range(header_lines):
            next(f)
        while True:
            char = f.read(1)
            if not char:
                # End of file: flush the remaining buffer with the header prepended
                if buffer:
                    with open(f"{output_prefix}_{file_count}.txt", 'w', encoding='utf-8') as out_file:
                        out_file.writelines(header)
                        out_file.write(buffer)
                break
            char_size = len(char.encode('utf-8'))
            # Start a new part before this character would push us past the limit
            if current_size + char_size > chunk_size:
                with open(f"{output_prefix}_{file_count}.txt", 'w', encoding='utf-8') as out_file:
                    out_file.writelines(header)
                    out_file.write(buffer)
                file_count += 1
                buffer = ""
                current_size = header_size
            buffer += char
            current_size += char_size
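A quick usage sketch, assuming a small CSV-style file (people.csv here is made up for the example) whose first line is the column header:

# Build a small CSV-style test file (hypothetical data)
with open("people.csv", "w", encoding="utf-8") as f:
    f.write("id,name,city\n")
    for i in range(2000):
        f.write(f"{i},用户{i},北京\n")

# Every people_part_N.txt now starts with the "id,name,city" header line
split_with_header("people.csv", "people_part", chunk_size=4096, header_lines=1)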
We've covered several ways to split a TXT file with Python: splitting by line, splitting by exact byte count, character-by-character splitting that keeps UTF-8 intact, a buffered variant for better throughput, and a version that repeats a file header in every part.
Remember! Which method to choose depends on your actual needs. If you are dealing with GB-scale files, go with the buffered approach and consider advanced techniques such as memory mapping (a minimal sketch follows below). Hope this guide helps you solve your file-splitting problems!
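As a starting point for the memory-mapping idea mentioned above, here is a minimal sketch using Python's mmap module; like split_by_size it slices raw bytes, so it shares the UTF-8 boundary caveat and is best suited to ASCII or binary-safe content:

import mmap

def split_with_mmap(input_file, output_prefix, chunk_size=4096):
    """Byte-exact splitting over a memory-mapped file (not UTF-8 safe)."""
    with open(input_file, 'rb') as f:
        # length 0 maps the whole file; the file must be non-empty
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            file_count = 1
            for offset in range(0, len(mm), chunk_size):
                with open(f"{output_prefix}_{file_count}.txt", 'wb') as out_file:
                    out_file.write(mm[offset:offset + chunk_size])
                file_count += 1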