Efficiently read a big CSV file in parts using Dask
Become part of the top 3% of the developers by applying to Toptal https://topt.al/25cXVn
--
Music by Eric Matyas
https://www.soundimage.org
Track title: Hypnotic Puzzle4
--
Chapters
00:00 Question
01:31 Accepted answer (Score 3)
02:02 Thank you
--
Full question
https://stackoverflow.com/questions/6073...
--
Content licensed under CC BY-SA
https://meta.stackexchange.com/help/lice...
--
Tags
#python #csv #dask #daskdataframe
#avk47
--
ACCEPTED ANSWER
Score 4
Dask DataFrame will partition the data for you; you don't need to use nrows/skiprows yourself.

import dask.dataframe as dd

# One lazy dataframe made up of many pandas partitions
df = dd.read_csv(filename)
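
If you want direct control over the chunking, read_csv also accepts a blocksize argument. A minimal sketch; the 25MB value and file name are just illustrative assumptions:

import dask.dataframe as dd

# Ask Dask for roughly 25 MB of CSV text per partition
df = dd.read_csv("data.csv", blocksize="25MB")
print(df.npartitions)  # how many partitions Dask created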
If you want to pick out a particular partition, you can use the .partitions accessor:
part = df.partitions[i]
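
For example, to materialize just the first partition as an ordinary pandas DataFrame (a sketch assuming the df from above):

part = df.partitions[0]   # still lazy: no data is read yet
pdf = part.compute()      # triggers the read; returns a pandas DataFrame
print(len(pdf))           # rows in that one chunk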
However, you might also want to apply your function to every partition in parallel:

# process runs once per partition; Dask writes one output file
# per partition: data.0.csv, data.1.csv, ...
df.map_partitions(process).to_csv("data.*.csv")
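
A self-contained sketch of that pattern; the process function, file names, and "value" column are hypothetical placeholders:

import dask.dataframe as dd

def process(pdf):
    # pdf is one partition as a plain pandas DataFrame;
    # as a placeholder, keep only rows where "value" is positive
    return pdf[pdf["value"] > 0]

df = dd.read_csv("big.csv")

# Each partition is filtered in parallel and written to its own file
df.map_partitions(process).to_csv("data.*.csv")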