Nico Kruber walks us through the viable set of serializers in Apache Flink:
Flink handles data types and serialization with its own type descriptors, generic type extraction, and type serialization framework. We recommend reading through the documentation first in order to be able to follow the arguments we present below. In essence, Flink tries to infer information about your job’s data types for wire and state serialization, and to be able to use grouping, joining, and aggregation operations by referring to individual field names, e.g.
stream.keyBy("ruleId")
or
dataSet.join(another).where("name").equalTo("personName")
It also allows optimizations in the serialization format as well as reducing unnecessary de/serializations (mainly in certain Batch operations as well as in the SQL/Table APIs).
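As a rough illustration of why this type extraction matters, here is a minimal sketch (the Rule class, its fields, and the job are hypothetical, not from the original post): field-expression keying such as keyBy("ruleId") only works when Flink's type extractor recognizes the class as a POJO (public fields or getters/setters plus a no-argument constructor); otherwise Flink falls back to its generic serializer and these field references are unavailable.

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KeyByExample {

    // Public fields + a no-arg constructor let Flink's type extractor
    // treat this class as a POJO instead of a generic (Kryo) type.
    public static class Rule {
        public String ruleId;
        public long count;

        public Rule() {}

        public Rule(String ruleId, long count) {
            this.ruleId = ruleId;
            this.count = count;
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Rule> stream = env.fromElements(
                new Rule("a", 1L),
                new Rule("b", 2L));

        // Field-name keying and aggregation work here only because Rule was
        // recognized as a POJO. (String-based keyBy, as in the quoted example,
        // is deprecated in newer Flink versions in favor of lambda key selectors.)
        stream.keyBy("ruleId")
              .sum("count")
              .print();

        env.execute("keyBy example");
    }
}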
Click through for notes on each serializer, as well as a graph showing how much of a difference the choice of serializer can make.