I recently looked back through my own blog posts and noticed that, while there are a few posts about file operations, none of them cover files larger than 2 GB. I searched for posts written by others and am reposting the material here for future reference instead of writing it up myself. In my own work I usually just use with open() as F and then operate on F; the file object F can be treated as an iterator, which automatically handles buffered I/O and memory management, making it convenient and fast. Below are a few simple approaches.
1. Since the file is so large, the first idea that comes to mind is to split it into small chunks and read it piece by piece
def read_in_chunks(filePath, chunk_size=1024 * 1024):
    """
    Lazy function (generator) to read a file piece by piece.
    Default chunk size: 1M. You can set your own chunk size.
    """
    with open(filePath) as file_object:
        while True:
            chunk_data = file_object.read(chunk_size)
            if not chunk_data:
                break
            yield chunk_data

if __name__ == "__main__":
    filePath = './path/filename'
    for chunk in read_in_chunks(filePath):
        process(chunk)  # <do something with chunk>
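To make the chunked approach concrete, here is a minimal sketch of one common use of it: computing a checksum of a huge file without ever holding the whole file in memory. The path './path/filename' is just a placeholder, same as above, and the file is opened in binary mode so each chunk is a bytes object, which hashlib expects.

import hashlib

def md5_of_file(filePath, chunk_size=1024 * 1024):
    """Compute the MD5 digest of a large file, reading it in fixed-size chunks."""
    md5 = hashlib.md5()
    # Binary mode so read() returns raw bytes rather than decoded text.
    with open(filePath, 'rb') as file_object:
        while True:
            chunk_data = file_object.read(chunk_size)
            if not chunk_data:
                break
            md5.update(chunk_data)
    return md5.hexdigest()

if __name__ == "__main__":
    # './path/filename' is a placeholder path.
    print(md5_of_file('./path/filename'))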
2. Use with open()
# If the file is line based
with open(...) as f:
    for line in f:
        process(line)  # <do something with line>
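As a small concrete sketch of the line-based pattern (the file name access.log and the keyword are hypothetical), this counts matching lines in a large log; because the file object yields one buffered line at a time, memory usage stays flat no matter how big the file is.

# Count lines containing "ERROR" in a large (hypothetical) log file.
error_count = 0
with open('access.log') as f:
    for line in f:  # the file object streams one line at a time
        if 'ERROR' in line:
            error_count += 1
print(error_count)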
3. Use the fileinput module
import fileinput

for line in fileinput.input(['sum.log']):
    print(line)
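fileinput also streams line by line, and it can chain several files into one sequence. As a minimal sketch (the log file names here are hypothetical), it can also report which file and line number each record came from:

import fileinput

# Stream several (hypothetical) log files as one continuous sequence of lines.
for line in fileinput.input(['part1.log', 'part2.log']):
    # filename() and filelineno() identify where the current line came from.
    print(fileinput.filename(), fileinput.filelineno(), line.rstrip('\n'))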
Reference link:
https://www.cnblogs.com/yu-zhang/p/5949696.html