The Python Oracle

How to add pandas data to an existing csv file?

--------------------------------------------------
Rise to the top 3% as a developer or hire one of them at Toptal: https://topt.al/25cXVn
--------------------------------------------------

Music by Eric Matyas
https://www.soundimage.org
Track title: Puzzle Game Looping

--

Chapters
00:00 How To Add Pandas Data To An Existing Csv File?
00:21 Accepted Answer Score 967
00:52 Answer 2 Score 282
01:34 Answer 3 Score 102
01:50 Answer 4 Score 25
02:14 Answer 5 Score 6
02:42 Thank you

--

Full question
https://stackoverflow.com/questions/1753...

--

Content licensed under CC BY-SA
https://meta.stackexchange.com/help/lice...

--

Tags
#python #pandas #csv #dataframe

#avk47



ACCEPTED ANSWER

Score 985


You can specify a Python write mode in the pandas to_csv function; for append it is 'a'.

In your case:

df.to_csv('my_csv.csv', mode='a', header=False)

The default mode is 'w'.

If the file might not exist yet, you can make sure the header is written only on the first write using this variation:

import os

output_path = 'my_csv.csv'
df.to_csv(output_path, mode='a', header=not os.path.exists(output_path))
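
A minimal, self-contained sketch of that pattern (the file name and sample DataFrames here are just placeholders):

import os
import pandas as pd

output_path = 'my_csv.csv'

# First batch: the file does not exist yet, so the header is written.
pd.DataFrame({'A': [1, 2], 'B': [3, 4]}).to_csv(
    output_path, mode='a', header=not os.path.exists(output_path), index=False)

# Second batch: the file now exists, so no header is written again.
pd.DataFrame({'A': [5, 6], 'B': [7, 8]}).to_csv(
    output_path, mode='a', header=not os.path.exists(output_path), index=False)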



ANSWER 2

Score 281


You can append to a csv by opening the file in append mode:

with open('my_csv.csv', 'a') as f:
    df.to_csv(f, header=False)

If this was your csv, foo.csv:

,A,B,C
0,1,2,3
1,4,5,6

If you read that and then append, for example, df + 6:

In [1]: df = pd.read_csv('foo.csv', index_col=0)

In [2]: df
Out[2]:
   A  B  C
0  1  2  3
1  4  5  6

In [3]: df + 6
Out[3]:
    A   B   C
0   7   8   9
1  10  11  12

In [4]: with open('foo.csv', 'a') as f:
             (df + 6).to_csv(f, header=False)

foo.csv becomes:

,A,B,C
0,1,2,3
1,4,5,6
0,7,8,9
1,10,11,12



ANSWER 3

Score 25


A little helper function I use, with some header-checking safeguards, to handle it all:

def appendDFToCSV_void(df, csvFilePath, sep=","):
    import os
    if not os.path.isfile(csvFilePath):
        df.to_csv(csvFilePath, mode='a', index=False, sep=sep)
    elif len(df.columns) != len(pd.read_csv(csvFilePath, nrows=1, sep=sep).columns):
        raise Exception("Columns do not match!! Dataframe has " + str(len(df.columns)) + " columns. CSV file has " + str(len(pd.read_csv(csvFilePath, nrows=1, sep=sep).columns)) + " columns.")
    elif not (df.columns == pd.read_csv(csvFilePath, nrows=1, sep=sep).columns).all():
        raise Exception("Columns and column order of dataframe and csv file do not match!!")
    else:
        df.to_csv(csvFilePath, mode='a', index=False, sep=sep, header=False)
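
For example, a quick usage sketch (the DataFrame and the file name log.csv are placeholders):

import pandas as pd

df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})

appendDFToCSV_void(df, 'log.csv')   # creates log.csv with a header
appendDFToCSV_void(df, 'log.csv')   # appends the same rows without repeating the header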



ANSWER 4

Score 6


Starting out with PySpark DataFrames, I got type conversion errors (when converting to pandas DataFrames and then appending to csv) because of the schema/column types in my PySpark DataFrames.

I solved the problem by forcing all columns in each DataFrame to be of type string and then appending to the csv as follows:

# Cast every column to string first so mixed Spark types do not break to_csv.
with open('testAppend.csv', 'a') as f:
    df2.toPandas().astype(str).to_csv(f, header=False)
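
If the csv might not exist yet, this can be combined with the header check from the accepted answer. A minimal sketch, assuming a PySpark DataFrame df2 is already available and using a placeholder file name:

import os

output_path = 'testAppend.csv'   # placeholder path

# Convert the Spark DataFrame to pandas, force every column to string,
# then append, writing the header only if the file does not exist yet.
pdf = df2.toPandas().astype(str)
pdf.to_csv(output_path, mode='a', header=not os.path.exists(output_path), index=False)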