Deciding about De/Serialization in PySpark Storage Levels

Serialization can save substantial space at the cost of some extra CPU time; by default, PySpark uses the cPickle serializer. (The general internal design of PySpark is explained at the following link: PySpark_Internals.)
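As a quick illustration, here is a minimal sketch of persisting an RDD with an explicit storage level in PySpark (the application name and sample data are placeholders):

    from pyspark import SparkContext, StorageLevel

    sc = SparkContext(appName="storage-level-demo")  # placeholder app name

    rdd = sc.parallelize(range(1000))
    # Request memory-plus-disk storage; in PySpark the Python objects are
    # pickled before being handed to the JVM in any case.
    rdd.persist(StorageLevel.MEMORY_AND_DISK)
    rdd.count()  # an action that materializes the cached data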

Prior to PySpark 2.0, stored objects were always serialized regardless of whether you chose a serialized level. In other words, the flag “deserialized” had no effect (as documented below: StorageLevel_configuration). For that reason, the following pairs of StorageLevel options had identical effects:

  • MEMORY_AND_DISK, MEMORY_AND_DISK_SER
  • MEMORY_AND_DISK_2, MEMORY_AND_DISK_SER_2
  • MEMORY_ONLY, MEMORY_ONLY_SER
  • MEMORY_ONLY_2, MEMORY_ONLY_SER_2
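To see what “identical effects” means concretely, the sketch below builds two levels that differ only in the “deserialized” flag. In pre-2.0 PySpark both requests produced the same stored bytes, because Python objects were pickled either way (the arguments follow the Python StorageLevel constructor: useDisk, useMemory, useOffHeap, deserialized, replication):

    from pyspark import StorageLevel

    # StorageLevel(useDisk, useMemory, useOffHeap, deserialized, replication=1)
    plain = StorageLevel(False, True, False, True)  # nominally deserialized
    ser = StorageLevel(False, True, False, False)   # nominally serialized

    # Pre-2.0 PySpark ignored the "deserialized" flag, so both levels
    # cached the same pickled representation.
    print(plain)
    print(ser)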

In the latest master branch (after the release of 1.6.x), the changes in Pull Request #10092 reduce the number of storage levels exposed in PySpark. The following levels have been proposed for deprecation:

  • MEMORY_AND_DISK_SER
  • MEMORY_AND_DISK_SER_2
  • MEMORY_ONLY_SER
  • MEMORY_ONLY_SER_2
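For code that must run on both older and newer releases, one defensive option (a sketch, not something the pull request itself prescribes) is to fall back to the non-SER level, which behaves the same in PySpark anyway:

    from pyspark import StorageLevel

    # Use the _SER variant where it still exists; otherwise fall back to
    # MEMORY_AND_DISK, which stores the same pickled bytes in PySpark.
    level = getattr(StorageLevel, "MEMORY_AND_DISK_SER",
                    StorageLevel.MEMORY_AND_DISK)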

Note that the storage levels actually available in Python are MEMORY_ONLY, MEMORY_ONLY_2, MEMORY_AND_DISK, MEMORY_AND_DISK_2, DISK_ONLY, DISK_ONLY_2 and OFF_HEAP. All of these remaining options set “deserialized” to false.
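You can confirm this directly from the Python StorageLevel constants, each of which carries a deserialized attribute:

    from pyspark import StorageLevel

    for name in ("MEMORY_ONLY", "MEMORY_ONLY_2", "MEMORY_AND_DISK",
                 "MEMORY_AND_DISK_2", "DISK_ONLY", "DISK_ONLY_2",
                 "OFF_HEAP"):
        level = getattr(StorageLevel, name)
        print(name, level.deserialized)  # False for every remaining level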

Biography: Dr. Xiao Li is an active Apache Spark committer at the IBM Spark Technology Center. His main interests are Spark, data replication and data integration. He received his Ph.D. from the University of Florida in 2011. Xiao has more than eight papers and eight patent applications in the field of data management.
