fout = open('school', 'w')   # open the file in write mode (if it already exists, it is overwritten)
#fout = open('school', 'a')  # open the file in append mode (if it already exists, new data goes at the end)
fout.write(speech)           # 'speech' is the string prepared earlier
fout.close()

with open('school', 'r') as fin:
    value = fin.read()
    print value
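A quick sketch of the append mode from the commented-out line above: 'a' adds to the end of the existing file instead of replacing it, so running the snippet repeatedly keeps accumulating lines.

# append mode: the original contents of 'school' stay in place
fout = open('school', 'a')
fout.write('one more line\n')
fout.close()

with open('school', 'r') as fin:
    print(fin.read())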
Writing CSV
import csv

with open('sample.csv', 'r') as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    with open('result2.csv', 'w') as fw:
        writer = csv.writer(fw)
        writer.writerow(['city', 'population'])
        for row in reader:
            writer.writerow([row[1], row[2]])
            print row, type(row)   # each row is a plain list of strings
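The same job can also be written with csv.DictReader / csv.DictWriter, which address columns by header name instead of position. The sketch below assumes the source file's header row actually contains columns named 'city' and 'population' (the original snippet picks them by index, so the real names may differ):

import csv

# DictReader turns every row into a dict keyed by the header names
with open('sample.csv', 'r') as f:
    reader = csv.DictReader(f)
    with open('result2.csv', 'w') as fw:
        writer = csv.DictWriter(fw, fieldnames=['city', 'population'])
        writer.writeheader()
        for row in reader:
            writer.writerow({'city': row['city'], 'population': row['population']})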
Reading one line at a time
f = open('C:\\Python27\\readme.txt')
f.readline()    # returns the next line as a string, including the trailing newline
Reading multiple lines
f.readlines()   # returns all remaining lines as a list of strings
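For larger files the usual idiom is to iterate over the file object itself, which reads one line per loop pass without loading everything into memory. A minimal sketch using the same example file as above:

# the file object is its own line iterator; the with-block also closes it for us
with open('C:\\Python27\\readme.txt') as f:
    for line in f:
        print(line.rstrip())   # drop the trailing newline before printing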
----------------------------------------------
(pickle == pickled cucumbers)
import pickle

f = open(path, 'wb')
pickle.dump(data, f)   # serialize 'data' and write it to the file
f.close()

f = open(path, 'rb')
a = pickle.load(f)     # read it back and rebuild the original object
f.close()
print a
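By default, protocol 0 writes a human-readable ASCII pickle; passing a higher protocol switches to the compact binary format (the "pickle-p2" variant in the comparison further down). A minimal sketch, reusing the same data and path as above:

# protocol 2 is the newest binary protocol available on Python 2.7
with open(path, 'wb') as f:
    pickle.dump(data, f, protocol=2)

with open(path, 'rb') as f:
    a = pickle.load(f)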
pandas has its own helpers for pickling a whole DataFrame:
df.to_pickle(file_name)         # write the DataFrame to a pickle file
df = pd.read_pickle(file_name)  # load it back
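A self-contained round trip, with a made-up file name and sample data just for illustration:

import pandas as pd

# build a tiny DataFrame, save it as a pickle, and load it back
df = pd.DataFrame({'city': ['Seoul', 'Busan'], 'population': [9700000, 3400000]})
df.to_pickle('cities.pkl')

df2 = pd.read_pickle('cities.pkl')
print(df2)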
Although there are already some answers, I found a nice comparison in which they tried several ways to serialize Pandas DataFrames: Efficiently Store Pandas DataFrames [Edit: the page has been deleted, but it is still available on web.archive.org].
They compare:
- pickle: original ASCII data format
- cPickle, a C library
- pickle-p2: uses the newer binary format
- json: standard-library json module
- json-no-index: like json, but without index
- msgpack: binary JSON alternative
- CSV
- hdfstore: HDF5 storage format
Benchmark chart from that comparison: https://i.stack.imgur.com/T9JEL.png
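With the original page gone, a rough local version of the same kind of comparison is easy to run. The sketch below only covers formats that plain pandas handles without extra packages (pickle, CSV, JSON), and the absolute numbers will of course differ from the original benchmark:

import time
import numpy as np
import pandas as pd

# a random DataFrame large enough for the timings to be measurable
df = pd.DataFrame(np.random.randn(100000, 5), columns=list('abcde'))

def timed(label, save):
    start = time.time()
    save()
    print('%-8s %.3f s' % (label, time.time() - start))

timed('pickle', lambda: df.to_pickle('bench.pkl'))
timed('csv',    lambda: df.to_csv('bench.csv'))
timed('json',   lambda: df.to_json('bench.json'))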